Jan 29 16:21:55 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 29 16:21:55 crc restorecon[4697]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:21:55 crc restorecon[4697]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:21:55 crc restorecon[4697]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:21:55 crc restorecon[4697]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:21:55 crc restorecon[4697]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:55 crc restorecon[4697]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:21:55 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 
16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc 
restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:21:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:21:56 crc restorecon[4697]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:21:56 crc restorecon[4697]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 29 16:21:58 crc kubenswrapper[4886]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:21:58 crc kubenswrapper[4886]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 29 16:21:58 crc kubenswrapper[4886]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:21:58 crc kubenswrapper[4886]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
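[Editor's note] The four "Flag ... has been deprecated" warnings above (and the two that follow) all point at the same remedy: move the setting out of kubelet command-line flags and into the KubeletConfiguration file named by --config. Below is a minimal sketch of that migration, written in Python to stay in one language with the other sketches in this excerpt; the field values are illustrative placeholders, not the ones this CRC node actually uses, and PyYAML is assumed to be installed.

# Sketch: KubeletConfiguration equivalents of the deprecated flags above.
# Values are placeholders, not this node's real settings. Assumes PyYAML.
import yaml

kubelet_config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    # replaces --container-runtime-endpoint
    "containerRuntimeEndpoint": "unix:///var/run/crio/crio.sock",
    # replaces --volume-plugin-dir
    "volumePluginDir": "/etc/kubernetes/kubelet-plugins/volume/exec",
    # replaces --register-with-taints
    "registerWithTaints": [
        {"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}
    ],
    # replaces --system-reserved
    "systemReserved": {"cpu": "500m", "memory": "1Gi"},
    # replaces --minimum-container-ttl-duration, per the warning's advice
    # to use eviction thresholds instead
    "evictionHard": {"memory.available": "100Mi"},
}

print(yaml.safe_dump(kubelet_config, sort_keys=False))

Note that --pod-infra-container-image (first warning below) is the odd one out: per its own message it has no config-file replacement, and the sandbox (pause) image should instead be set in the container runtime itself.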
Jan 29 16:21:58 crc kubenswrapper[4886]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 16:21:58 crc kubenswrapper[4886]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.241609 4886 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.246792 4886 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.246826 4886 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.246837 4886 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.246846 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.246855 4886 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.246895 4886 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.246906 4886 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.246915 4886 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.246923 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.246930 4886 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.246939 4886 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.246947 4886 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.246955 4886 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.246963 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.246970 4886 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.246977 4886 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.246985 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.246993 4886 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247000 4886 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247009 4886 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247016 4886 feature_gate.go:330] unrecognized 
feature gate: MetricsCollectionProfiles Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247025 4886 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247032 4886 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247040 4886 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247048 4886 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247055 4886 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247062 4886 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247071 4886 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247079 4886 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247087 4886 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247094 4886 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247102 4886 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247110 4886 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247117 4886 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247126 4886 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247134 4886 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247141 4886 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247150 4886 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247160 4886 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
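Every kubenswrapper entry carries a klog-style header: a severity letter (I/W/E), the date as MMDD, a microsecond timestamp, the PID, and the source file:line that emitted it, e.g. W0129 16:21:58.247025 4886 feature_gate.go:330]. A sketch of a parser for that prefix; the field names are mine:

```python
import re
from typing import NamedTuple

# klog header as seen above: W0129 16:21:58.247025 4886 feature_gate.go:330] msg
KLOG = re.compile(
    r"(?P<sev>[IWEF])(?P<month>\d{2})(?P<day>\d{2})\s+"
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+"
    r"(?P<pid>\d+)\s+"
    r"(?P<source>[\w./-]+:\d+)\]\s?(?P<msg>.*)"
)

class KlogEntry(NamedTuple):
    severity: str   # I=info, W=warning, E=error, F=fatal
    month: int
    day: int
    time: str
    pid: int
    source: str     # file:line inside the kubelet source tree
    message: str

def parse_klog(line: str) -> KlogEntry | None:
    m = KLOG.search(line)
    if not m:
        return None
    return KlogEntry(m["sev"], int(m["month"]), int(m["day"]),
                     m["time"], int(m["pid"]), m["source"], m["msg"])

print(parse_klog("W0129 16:21:58.247025 4886 feature_gate.go:330] "
                 "unrecognized feature gate: ConsolePluginContentSecurityPolicy"))
```

Grouping by the source field is the quickest way to separate the feature_gate.go noise from the substantive server.go and certificate_manager.go entries later in the boot.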
Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247170 4886 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247179 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247187 4886 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247195 4886 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247203 4886 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247211 4886 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247220 4886 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247229 4886 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247237 4886 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247244 4886 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247253 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247260 4886 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247272 4886 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247281 4886 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247289 4886 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247297 4886 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247304 4886 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247312 4886 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247319 4886 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247351 4886 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247360 4886 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247367 4886 feature_gate.go:330] unrecognized feature gate: Example Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247376 4886 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247386 4886 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247395 4886 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247408 4886 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247417 4886 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247427 4886 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247437 4886 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247444 4886 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247454 4886 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.247464 4886 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
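The run of "unrecognized feature gate" warnings that just ended is emitted because the full OpenShift gate list is handed to a component that only knows the upstream Kubernetes gates; the same names recur each time the list is re-parsed, as happens again below. A sketch that reduces the noise to names with counts (same hypothetical kubelet.log as above):

```python
import re
from collections import Counter

UNRECOGNIZED = re.compile(r"unrecognized feature gate: (\w+)")

def gate_counts(journal_text: str) -> Counter:
    """Count how many times each unknown gate name was warned about."""
    return Counter(UNRECOGNIZED.findall(journal_text))

if __name__ == "__main__":
    with open("kubelet.log") as f:  # hypothetical path
        counts = gate_counts(f.read())
    # Roughly equal counts across all gates would confirm the same list is
    # simply being re-parsed on each pass rather than growing over time.
    for gate, n in counts.most_common():
        print(f"{n:3d}  {gate}")
```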
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247652 4886 flags.go:64] FLAG: --address="0.0.0.0" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247681 4886 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247703 4886 flags.go:64] FLAG: --anonymous-auth="true" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247718 4886 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247732 4886 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247742 4886 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247754 4886 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247766 4886 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247777 4886 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247786 4886 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247796 4886 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247806 4886 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247815 4886 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247824 4886 flags.go:64] FLAG: --cgroup-root="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247833 4886 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247842 4886 flags.go:64] FLAG: --client-ca-file="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247851 4886 flags.go:64] FLAG: --cloud-config="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247859 4886 flags.go:64] FLAG: --cloud-provider="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247868 4886 flags.go:64] FLAG: --cluster-dns="[]" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247879 4886 flags.go:64] FLAG: --cluster-domain="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247887 4886 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247896 4886 flags.go:64] FLAG: --config-dir="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247905 4886 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247915 4886 flags.go:64] FLAG: --container-log-max-files="5" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247926 4886 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247935 4886 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247944 4886 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247953 4886 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247962 4886 flags.go:64] FLAG: --contention-profiling="false" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 
16:21:58.247972 4886 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247980 4886 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247989 4886 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.247998 4886 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248009 4886 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248018 4886 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248027 4886 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248036 4886 flags.go:64] FLAG: --enable-load-reader="false" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248044 4886 flags.go:64] FLAG: --enable-server="true" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248053 4886 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248066 4886 flags.go:64] FLAG: --event-burst="100" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248075 4886 flags.go:64] FLAG: --event-qps="50" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248084 4886 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248093 4886 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248102 4886 flags.go:64] FLAG: --eviction-hard="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248112 4886 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248121 4886 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248130 4886 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248140 4886 flags.go:64] FLAG: --eviction-soft="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248149 4886 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248158 4886 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248167 4886 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248203 4886 flags.go:64] FLAG: --experimental-mounter-path="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248212 4886 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248221 4886 flags.go:64] FLAG: --fail-swap-on="true" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248243 4886 flags.go:64] FLAG: --feature-gates="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248254 4886 flags.go:64] FLAG: --file-check-frequency="20s" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248263 4886 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248272 4886 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248281 4886 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 
16:21:58.248291 4886 flags.go:64] FLAG: --healthz-port="10248" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248356 4886 flags.go:64] FLAG: --help="false" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248365 4886 flags.go:64] FLAG: --hostname-override="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248374 4886 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248383 4886 flags.go:64] FLAG: --http-check-frequency="20s" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248392 4886 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248401 4886 flags.go:64] FLAG: --image-credential-provider-config="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248410 4886 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248419 4886 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248428 4886 flags.go:64] FLAG: --image-service-endpoint="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248436 4886 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248445 4886 flags.go:64] FLAG: --kube-api-burst="100" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248454 4886 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248463 4886 flags.go:64] FLAG: --kube-api-qps="50" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248472 4886 flags.go:64] FLAG: --kube-reserved="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248481 4886 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248490 4886 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248500 4886 flags.go:64] FLAG: --kubelet-cgroups="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248509 4886 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248517 4886 flags.go:64] FLAG: --lock-file="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248526 4886 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248535 4886 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248544 4886 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248573 4886 flags.go:64] FLAG: --log-json-split-stream="false" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248583 4886 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248592 4886 flags.go:64] FLAG: --log-text-split-stream="false" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248601 4886 flags.go:64] FLAG: --logging-format="text" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248610 4886 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248620 4886 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248628 4886 flags.go:64] FLAG: --manifest-url="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248637 4886 
flags.go:64] FLAG: --manifest-url-header="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248648 4886 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248659 4886 flags.go:64] FLAG: --max-open-files="1000000" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248670 4886 flags.go:64] FLAG: --max-pods="110" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248684 4886 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248693 4886 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248702 4886 flags.go:64] FLAG: --memory-manager-policy="None" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248712 4886 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248721 4886 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248730 4886 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248739 4886 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248758 4886 flags.go:64] FLAG: --node-status-max-images="50" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248767 4886 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248776 4886 flags.go:64] FLAG: --oom-score-adj="-999" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248786 4886 flags.go:64] FLAG: --pod-cidr="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248794 4886 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248808 4886 flags.go:64] FLAG: --pod-manifest-path="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248816 4886 flags.go:64] FLAG: --pod-max-pids="-1" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248826 4886 flags.go:64] FLAG: --pods-per-core="0" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248834 4886 flags.go:64] FLAG: --port="10250" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248843 4886 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248852 4886 flags.go:64] FLAG: --provider-id="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248861 4886 flags.go:64] FLAG: --qos-reserved="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248870 4886 flags.go:64] FLAG: --read-only-port="10255" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248879 4886 flags.go:64] FLAG: --register-node="true" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248888 4886 flags.go:64] FLAG: --register-schedulable="true" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248896 4886 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248910 4886 flags.go:64] FLAG: --registry-burst="10" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248920 4886 flags.go:64] FLAG: --registry-qps="5" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248928 4886 flags.go:64] 
FLAG: --reserved-cpus="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248938 4886 flags.go:64] FLAG: --reserved-memory="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248949 4886 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248958 4886 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248966 4886 flags.go:64] FLAG: --rotate-certificates="false" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.248976 4886 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.249001 4886 flags.go:64] FLAG: --runonce="false" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.249014 4886 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.249023 4886 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.249032 4886 flags.go:64] FLAG: --seccomp-default="false" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.249041 4886 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.249050 4886 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.249059 4886 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.249068 4886 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.249077 4886 flags.go:64] FLAG: --storage-driver-password="root" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.249086 4886 flags.go:64] FLAG: --storage-driver-secure="false" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.249095 4886 flags.go:64] FLAG: --storage-driver-table="stats" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.249104 4886 flags.go:64] FLAG: --storage-driver-user="root" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.249112 4886 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.249121 4886 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.249131 4886 flags.go:64] FLAG: --system-cgroups="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.249142 4886 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.249160 4886 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.249171 4886 flags.go:64] FLAG: --tls-cert-file="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.249182 4886 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.249195 4886 flags.go:64] FLAG: --tls-min-version="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.249207 4886 flags.go:64] FLAG: --tls-private-key-file="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.249217 4886 flags.go:64] FLAG: --topology-manager-policy="none" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.249228 4886 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.249238 4886 flags.go:64] FLAG: --topology-manager-scope="container" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.249249 4886 flags.go:64] 
FLAG: --v="2" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.249273 4886 flags.go:64] FLAG: --version="false" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.249300 4886 flags.go:64] FLAG: --vmodule="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.249311 4886 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.249320 4886 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249611 4886 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249623 4886 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249651 4886 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249659 4886 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249674 4886 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249682 4886 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249690 4886 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249698 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249706 4886 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249714 4886 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249721 4886 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249733 4886 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
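The long flags.go:64] FLAG: --name="value" dump that precedes these warnings is the kubelet echoing every command-line flag after defaulting, one entry per flag, with the value always quoted (empty lists as "[]", durations as "2m0s"). A sketch that folds the dump into a dictionary, useful for diffing two boots; the function names are illustrative:

```python
import re

# Matches entries like: flags.go:64] FLAG: --max-pods="110"
FLAG = re.compile(r'flags\.go:\d+\] FLAG: (--[\w-]+)="(.*?)"')

def flag_dump(journal_text: str) -> dict[str, str]:
    """Collect the kubelet's FLAG echo into {flag: raw_string_value}."""
    return dict(FLAG.findall(journal_text))

def diff_flags(old: dict[str, str], new: dict[str, str]) -> None:
    """Print flags whose effective value changed between two boots."""
    for flag in sorted(old.keys() | new.keys()):
        if old.get(flag) != new.get(flag):
            print(f"{flag}: {old.get(flag)!r} -> {new.get(flag)!r}")
```

From this boot's dump you can read off, for example, that --config points at /etc/kubernetes/kubelet.conf and --kubeconfig at /var/lib/kubelet/kubeconfig, which is where the deprecated flags flagged earlier are meant to migrate.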
Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249743 4886 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249752 4886 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249760 4886 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249768 4886 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249776 4886 feature_gate.go:330] unrecognized feature gate: Example Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249784 4886 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249792 4886 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249800 4886 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249807 4886 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249815 4886 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249823 4886 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249830 4886 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249838 4886 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249848 4886 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249858 4886 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249867 4886 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249876 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249885 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249893 4886 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249900 4886 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249908 4886 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249918 4886 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249931 4886 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249940 4886 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249951 4886 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249959 4886 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249979 4886 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249987 4886 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.249995 4886 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.250003 4886 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.250010 4886 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.250018 4886 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.250027 4886 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.250035 4886 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.250043 4886 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.250050 4886 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.250059 4886 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.250066 4886 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.250074 4886 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.250081 4886 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.250089 4886 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.250097 4886 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.250107 4886 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
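Interleaved with the unknown-gate noise is a different class of warning, from feature_gate.go:353 and :351: gates the kubelet does recognize but that are already GA or deprecated (ValidatingAdmissionPolicy, DisableKubeletCloudCredentialProviders, CloudDualStackNodeIPs, and KMSv1 on this boot), so setting them explicitly is a no-op due for cleanup. A small sketch that lists them:

```python
import re

# "Setting GA feature gate X=true." / "Setting deprecated feature gate Y=true."
STALE = re.compile(
    r"Setting (GA|deprecated) feature gate (\w+)=(true|false)\. "
    r"It will be removed in a future release\."
)

def stale_gate_settings(journal_text: str) -> set[tuple[str, str, str]]:
    """Return (stage, gate, value) triples that can be dropped from config."""
    return set(STALE.findall(journal_text))
```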
Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.250119 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.250130 4886 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.250141 4886 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.250155 4886 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.250176 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.250188 4886 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.250198 4886 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.250210 4886 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.250220 4886 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.250229 4886 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.250237 4886 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.250249 4886 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.250257 4886 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.250267 4886 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.250275 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.250282 4886 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.250308 4886 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.264572 4886 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.264926 4886 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265044 4886 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265054 4886 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265061 4886 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265067 4886 feature_gate.go:330] 
unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265073 4886 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265078 4886 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265083 4886 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265088 4886 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265093 4886 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265098 4886 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265103 4886 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265108 4886 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265113 4886 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265117 4886 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265124 4886 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265133 4886 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265139 4886 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265144 4886 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265150 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265155 4886 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265160 4886 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265165 4886 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265170 4886 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265176 4886 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265181 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265186 4886 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265190 4886 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265195 4886 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265201 4886 feature_gate.go:330] unrecognized feature gate: 
AWSEFSDriverVolumeMetrics Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265208 4886 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265214 4886 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265222 4886 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265228 4886 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265235 4886 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265242 4886 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265249 4886 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265255 4886 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265261 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265266 4886 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265272 4886 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265277 4886 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265282 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265286 4886 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265292 4886 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265297 4886 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265303 4886 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265309 4886 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265314 4886 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265320 4886 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265343 4886 feature_gate.go:330] unrecognized feature gate: Example Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265349 4886 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265354 4886 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265359 4886 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265364 4886 feature_gate.go:330] unrecognized feature gate: 
UpgradeStatus Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265369 4886 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265374 4886 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265379 4886 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265384 4886 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265389 4886 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265394 4886 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265399 4886 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265404 4886 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265408 4886 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265416 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265422 4886 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265427 4886 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265432 4886 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265437 4886 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265442 4886 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265447 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265452 4886 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.265461 4886 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265628 4886 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265638 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265643 4886 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265651 4886 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
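Each parsing pass ends with a feature_gate.go:386] line giving the effective map that survived, in Go map syntax, as seen just above. A sketch that turns that dump into a Python dict:

```python
import re

MAP_LINE = re.compile(r"feature gates: \{map\[(.*?)\]\}")

def effective_gates(journal_text: str) -> dict[str, bool]:
    """Parse the Go-syntax map dump, e.g. {map[KMSv1:true NodeSwap:false]}."""
    m = MAP_LINE.search(journal_text)
    if not m:
        return {}
    pairs = (item.split(":") for item in m.group(1).split())
    return {name: value == "true" for name, value in pairs}

gates = effective_gates(
    "feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}"
)
assert gates["KMSv1"] and not gates["NodeSwap"]
```

On this boot the map is identical after every pass, confirming the repeated warning floods are harmless re-parsing rather than a drifting configuration.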
Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265780 4886 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265786 4886 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265791 4886 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265797 4886 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265802 4886 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265809 4886 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265815 4886 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265822 4886 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265827 4886 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265833 4886 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265838 4886 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265843 4886 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265848 4886 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265854 4886 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265860 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265867 4886 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265875 4886 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265884 4886 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265892 4886 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265899 4886 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265907 4886 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265912 4886 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265917 4886 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265922 4886 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265927 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265933 4886 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265937 4886 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265942 4886 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265947 4886 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265952 4886 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265957 4886 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265962 4886 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265967 4886 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265974 4886 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265979 4886 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265984 4886 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265989 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265994 4886 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.265998 4886 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.266003 4886 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.266008 4886 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.266013 4886 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.266018 4886 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.266023 4886 feature_gate.go:330] unrecognized feature gate: 
OVNObservability Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.266028 4886 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.266033 4886 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.266038 4886 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.266043 4886 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.266047 4886 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.266052 4886 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.266056 4886 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.266062 4886 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.266068 4886 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.266073 4886 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.266078 4886 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.266082 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.266087 4886 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.266092 4886 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.266097 4886 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.266103 4886 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.266108 4886 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.266114 4886 feature_gate.go:330] unrecognized feature gate: Example Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.266119 4886 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.266125 4886 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.266130 4886 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.266136 4886 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.266141 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.266148 4886 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false 
ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.267343 4886 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.289295 4886 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.289428 4886 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.308520 4886 server.go:997] "Starting client certificate rotation"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.308579 4886 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.308942 4886 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-09 06:00:58.132648677 +0000 UTC
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.309167 4886 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.379407 4886 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.381682 4886 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 29 16:21:58 crc kubenswrapper[4886]: E0129 16:21:58.384060 4886 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.174:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.409156 4886 log.go:25] "Validated CRI v1 runtime API"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.481735 4886 log.go:25] "Validated CRI v1 image API"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.483390 4886 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.488395 4886 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-29-16-17-07-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.488426 4886 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}]
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.515815 4886
manager.go:217] Machine: {Timestamp:2026-01-29 16:21:58.50299037 +0000 UTC m=+1.411709642 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:f9e02871-746f-4d5e-9d80-7fb23e871a7f BootID:bd8b5dfd-41ae-412b-b205-175b6140aee3 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:48:85:bf Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:48:85:bf Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:2f:87:de Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:8e:72:58 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:7d:b7:11 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:8d:02:b9 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:4e:b5:ad:5c:d9:b0 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:96:3a:09:fa:cd:d5 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 
Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.516135 4886 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.516303 4886 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.517387 4886 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.517586 4886 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.517635 4886 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.517855 4886 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.517868 4886 container_manager_linux.go:303] "Creating device plugin manager"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.518195 4886 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.518238 4886 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.518455 4886 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.518538 4886 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.523384 4886 kubelet.go:418] "Attempting to sync node with API server"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.523408 4886 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.523433 4886 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.523451 4886 kubelet.go:324] "Adding apiserver pod source"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.523468 4886 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.529859 4886 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.531222 4886 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
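The nodeConfig entry above pins down how this node's allocatable resources will be derived: "KubeReserved" is null, "SystemReserved" holds back 200m CPU, 350Mi memory, and 350Mi ephemeral-storage, and the hard eviction threshold for memory.available is 100Mi. The standard Kubernetes node-allocatable formula is capacity minus kube-reserved minus system-reserved minus the hard eviction threshold. A minimal Go sketch (not kubelet's own code; values copied from the nodeConfig entry here and the MemoryCapacity field in the cAdvisor Machine entry above) works that out:

```go
// allocatable.go -- minimal sketch of the node-allocatable formula, assuming
// the values logged above:
//   allocatable = capacity - kubeReserved - systemReserved - hardEviction
package main

import "fmt"

const Mi int64 = 1024 * 1024

func main() {
	capacity := int64(33654124544) // MemoryCapacity from the cAdvisor Machine entry
	kubeReserved := int64(0)       // "KubeReserved":null in nodeConfig
	systemReserved := 350 * Mi     // "SystemReserved":{"memory":"350Mi"}
	hardEviction := 100 * Mi       // memory.available hard eviction threshold "100Mi"

	allocatable := capacity - kubeReserved - systemReserved - hardEviction
	fmt.Printf("allocatable memory: %d bytes (%.2f GiB)\n",
		allocatable, float64(allocatable)/float64(1<<30))
}
```

Once the node registers, this figure should roughly line up with the Allocatable memory that `kubectl describe node crc` reports.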
Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.531288 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.174:6443: connect: connection refused
Jan 29 16:21:58 crc kubenswrapper[4886]: E0129 16:21:58.531389 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.174:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.531394 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.174:6443: connect: connection refused
Jan 29 16:21:58 crc kubenswrapper[4886]: E0129 16:21:58.531483 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.174:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.533492 4886 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.536632 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.536655 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.536663 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.536670 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.536683 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.536691 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.536700 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.536712 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.536722 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.536737 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.536749 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.536757 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.539474 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.540117 4886 server.go:1280] "Started kubelet"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.540601 4886 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.540583 4886 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.542035 4886 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 16:21:58 crc systemd[1]: Started Kubernetes Kubelet.
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.543734 4886 server.go:460] "Adding debug handlers to kubelet server"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.544567 4886 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.174:6443: connect: connection refused
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.559615 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.559688 4886 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.560179 4886 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.560191 4886 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.560225 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 12:18:25.215095886 +0000 UTC
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.560306 4886 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 29 16:21:58 crc kubenswrapper[4886]: E0129 16:21:58.560362 4886 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 29 16:21:58 crc kubenswrapper[4886]: E0129 16:21:58.561107 4886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.174:6443: connect: connection refused" interval="200ms"
Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.561155 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.174:6443: connect: connection refused
Jan 29 16:21:58 crc kubenswrapper[4886]: E0129 16:21:58.561897 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.174:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.566612 4886 factory.go:55] Registering systemd factory
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.566637 4886 factory.go:221] Registration of the systemd container factory successfully
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.569159 4886 factory.go:153] Registering CRI-O factory
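Every failure in the reflector, CSR, lease, and CSINode entries above has the same shape: dial tcp 38.129.56.174:6443: connect: connection refused. The kubelet has come up before the static-pod control plane it is about to launch from /etc/kubernetes/manifests, so nothing is listening yet on api-int.crc.testing:6443 and the client-go callers simply retry. A minimal Go sketch of the same check (endpoint copied from the log; this is a plain TCP probe, not kubelet code) reproduces the error until the kube-apiserver static pod is up:

```go
// probe.go -- minimal sketch reproducing the dial failure seen in the log:
// a bare TCP dial to the apiserver endpoint the kubelet clients are retrying.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint taken from the log entries above.
	conn, err := net.DialTimeout("tcp", "api-int.crc.testing:6443", 2*time.Second)
	if err != nil {
		// While the control plane is still starting, this prints something like:
		//   dial tcp 38.129.56.174:6443: connect: connection refused
		fmt.Println("probe failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver is accepting connections")
}
```

Run with `go run probe.go` on the node; once the apiserver container is serving, the same probe succeeds and the retry noise in the log stops.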
Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.569210 4886 factory.go:221] Registration of the crio container factory successfully Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.569435 4886 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.569483 4886 factory.go:103] Registering Raw factory Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.569507 4886 manager.go:1196] Started watching for new ooms in manager Jan 29 16:21:58 crc kubenswrapper[4886]: E0129 16:21:58.568622 4886 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.174:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f4027dd44a59a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 16:21:58.540060058 +0000 UTC m=+1.448779330,LastTimestamp:2026-01-29 16:21:58.540060058 +0000 UTC m=+1.448779330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.573173 4886 manager.go:319] Starting recovery of all containers Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.573927 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.573978 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.573991 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574004 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574017 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574043 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574057 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574070 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574085 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574101 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574113 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574139 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574151 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574167 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574180 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574191 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574203 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574216 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574229 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574243 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574258 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574268 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574281 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574294 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574306 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574339 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574361 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574376 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574388 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574399 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574411 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574422 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574436 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574447 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574459 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574470 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574481 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574492 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574504 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574515 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574546 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574561 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574572 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574588 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574602 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574615 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574627 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574641 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574654 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574666 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574677 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574690 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574708 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574721 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574754 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574768 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574781 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574793 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574805 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574824 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574837 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574851 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574865 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574880 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574893 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574906 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574919 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574931 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574942 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574954 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574972 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574988 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.574999 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575013 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575025 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575035 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575047 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575059 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575070 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575081 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575090 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575101 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575109 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" 
seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575118 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575127 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575145 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575155 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575165 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575174 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575183 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575193 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575205 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575216 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575227 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 29 16:21:58 
crc kubenswrapper[4886]: I0129 16:21:58.575239 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575252 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575263 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575275 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575292 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575340 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575354 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575365 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575375 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575383 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575398 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 
16:21:58.575409 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575421 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575434 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575445 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575457 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575469 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575483 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575495 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575508 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575521 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575534 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575546 4886 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575557 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575569 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575581 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575591 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575601 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575611 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575620 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575629 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575639 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575648 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575658 4886 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575667 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575676 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575684 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575695 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575704 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575712 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575721 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575731 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575740 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575748 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.575757 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578257 4886 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578287 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578303 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578314 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578377 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578393 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578404 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578414 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578424 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578434 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 29 16:21:58 crc 
kubenswrapper[4886]: I0129 16:21:58.578448 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578470 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578485 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578502 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578513 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578522 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578532 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578548 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578558 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578568 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578580 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578591 4886 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578602 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578613 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578624 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578636 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578647 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578658 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578670 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578682 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578691 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578702 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578713 4886 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578723 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578734 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578744 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578755 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578765 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578776 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578786 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578797 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578807 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578817 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578827 4886 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578838 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578847 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578856 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578867 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578885 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578907 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578921 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578934 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578949 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578962 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578976 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.578988 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.579002 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.579016 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.579030 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.579041 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.579051 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.579062 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.579073 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.579083 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.579094 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.579106 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.579115 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.579125 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.579149 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.579158 4886 reconstruct.go:97] "Volume reconstruction finished" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.579165 4886 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.592573 4886 manager.go:324] Recovery completed Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.602132 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.604192 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.604253 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.604264 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.605022 4886 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.605051 4886 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.605080 4886 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.611380 4886 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.613272 4886 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.613540 4886 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.613669 4886 kubelet.go:2335] "Starting kubelet main sync loop" Jan 29 16:21:58 crc kubenswrapper[4886]: E0129 16:21:58.613795 4886 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:21:58 crc kubenswrapper[4886]: W0129 16:21:58.614149 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.174:6443: connect: connection refused Jan 29 16:21:58 crc kubenswrapper[4886]: E0129 16:21:58.614198 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.174:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.655296 4886 policy_none.go:49] "None policy: Start" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.656418 4886 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.656485 4886 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:21:58 crc kubenswrapper[4886]: E0129 16:21:58.660649 4886 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 16:21:58 crc kubenswrapper[4886]: E0129 16:21:58.713978 4886 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.720960 4886 manager.go:334] "Starting Device Plugin manager" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.721235 4886 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.721258 4886 server.go:79] "Starting device plugin registration server" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.721773 4886 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.721815 4886 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.722055 4886 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.722188 4886 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.722210 4886 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:21:58 crc kubenswrapper[4886]: E0129 16:21:58.731843 4886 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 29 16:21:58 crc kubenswrapper[4886]: E0129 16:21:58.762073 4886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.174:6443: connect: connection refused" interval="400ms" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.822289 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.823499 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.823532 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.823544 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.823568 4886 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 16:21:58 crc kubenswrapper[4886]: E0129 16:21:58.824203 4886 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.174:6443: connect: connection refused" node="crc" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.914704 4886 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.914942 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.916433 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.916463 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.916472 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.916575 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.917478 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.917543 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.917559 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.918442 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.918485 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.918543 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.918443 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.918674 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.919942 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.919968 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.919975 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.920097 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.920114 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.920126 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.920174 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.920207 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.920242 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.920253 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.920431 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.920493 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.921057 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.921082 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.921093 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.921304 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.921595 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.921722 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.922949 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.923008 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.923029 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.923070 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.923152 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.923175 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.923305 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.923389 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.923883 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.923919 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.923930 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.924779 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.924838 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.924867 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.984465 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.984549 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.984594 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.984637 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.984734 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.984773 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.984869 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.984948 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.984999 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.985035 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.985060 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.985086 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.985116 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.985149 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 16:21:58 crc kubenswrapper[4886]: I0129 16:21:58.985175 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.024915 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.027282 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.027361 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.027385 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.027426 4886 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 16:21:59 crc kubenswrapper[4886]: E0129 16:21:59.028180 4886 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.174:6443: connect: connection refused" node="crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.086820 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.086906 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.086925 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:21:59 crc 
kubenswrapper[4886]: I0129 16:21:59.086926 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.086986 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.086986 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.086942 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.087039 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.087037 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.087062 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.087086 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.087106 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.087124 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.087132 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.087153 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.087176 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.087185 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.087208 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.087234 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.087215 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.087226 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.087240 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.087272 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.087210 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.087280 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.087296 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.087544 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.087642 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.087670 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.087757 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: E0129 16:21:59.163843 4886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.174:6443: connect: connection refused" interval="800ms" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.250842 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.271789 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.305988 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.324055 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.332249 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 29 16:21:59 crc kubenswrapper[4886]: W0129 16:21:59.335707 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-db0166c2ca13ccc4833d66c7d9f4d288a92c19bb91ede90aa2b3435b12be261a WatchSource:0}: Error finding container db0166c2ca13ccc4833d66c7d9f4d288a92c19bb91ede90aa2b3435b12be261a: Status 404 returned error can't find the container with id db0166c2ca13ccc4833d66c7d9f4d288a92c19bb91ede90aa2b3435b12be261a Jan 29 16:21:59 crc kubenswrapper[4886]: W0129 16:21:59.337370 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-0f12b17f572014446b79eaf5bf007afdda367530f53567c43340fb31c15dd9d8 WatchSource:0}: Error finding container 0f12b17f572014446b79eaf5bf007afdda367530f53567c43340fb31c15dd9d8: Status 404 returned error can't find the container with id 0f12b17f572014446b79eaf5bf007afdda367530f53567c43340fb31c15dd9d8 Jan 29 16:21:59 crc kubenswrapper[4886]: W0129 16:21:59.351057 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-31462ef6b9a4e70f7831fc4cdf52adece1d62f0d34f9b4df226be1ac446491e7 WatchSource:0}: Error finding container 31462ef6b9a4e70f7831fc4cdf52adece1d62f0d34f9b4df226be1ac446491e7: Status 404 returned error can't find the container with id 31462ef6b9a4e70f7831fc4cdf52adece1d62f0d34f9b4df226be1ac446491e7 Jan 29 16:21:59 crc kubenswrapper[4886]: W0129 16:21:59.354684 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-c5e321cc74329633bbce0bddcf113395cc124254d8787268a1866bc6df09fc19 WatchSource:0}: Error finding container c5e321cc74329633bbce0bddcf113395cc124254d8787268a1866bc6df09fc19: Status 404 returned error can't find the container with id c5e321cc74329633bbce0bddcf113395cc124254d8787268a1866bc6df09fc19 Jan 29 16:21:59 crc kubenswrapper[4886]: W0129 16:21:59.357719 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-40313624bc1b9af237d82499ebbc6af322985d82cfe413c22df1e46b6ba444fa WatchSource:0}: Error finding container 40313624bc1b9af237d82499ebbc6af322985d82cfe413c22df1e46b6ba444fa: Status 404 returned error can't find the container with id 40313624bc1b9af237d82499ebbc6af322985d82cfe413c22df1e46b6ba444fa Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.428515 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.429609 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.429644 4886 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.429655 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.429679 4886 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 16:21:59 crc kubenswrapper[4886]: E0129 16:21:59.430125 4886 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.174:6443: connect: connection refused" node="crc" Jan 29 16:21:59 crc kubenswrapper[4886]: W0129 16:21:59.539717 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.174:6443: connect: connection refused Jan 29 16:21:59 crc kubenswrapper[4886]: E0129 16:21:59.539816 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.174:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.545755 4886 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.174:6443: connect: connection refused Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.561142 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 22:44:38.872376122 +0000 UTC Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.618439 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"db0166c2ca13ccc4833d66c7d9f4d288a92c19bb91ede90aa2b3435b12be261a"} Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.619721 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"40313624bc1b9af237d82499ebbc6af322985d82cfe413c22df1e46b6ba444fa"} Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.621286 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c5e321cc74329633bbce0bddcf113395cc124254d8787268a1866bc6df09fc19"} Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.622309 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"31462ef6b9a4e70f7831fc4cdf52adece1d62f0d34f9b4df226be1ac446491e7"} Jan 29 16:21:59 crc kubenswrapper[4886]: I0129 16:21:59.623292 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0f12b17f572014446b79eaf5bf007afdda367530f53567c43340fb31c15dd9d8"} Jan 29 16:21:59 crc kubenswrapper[4886]: W0129 16:21:59.857803 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.174:6443: connect: connection refused Jan 29 16:21:59 crc kubenswrapper[4886]: E0129 16:21:59.857924 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.174:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:21:59 crc kubenswrapper[4886]: E0129 16:21:59.942137 4886 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.174:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f4027dd44a59a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 16:21:58.540060058 +0000 UTC m=+1.448779330,LastTimestamp:2026-01-29 16:21:58.540060058 +0000 UTC m=+1.448779330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 16:21:59 crc kubenswrapper[4886]: E0129 16:21:59.965054 4886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.174:6443: connect: connection refused" interval="1.6s" Jan 29 16:21:59 crc kubenswrapper[4886]: W0129 16:21:59.973105 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.174:6443: connect: connection refused Jan 29 16:21:59 crc kubenswrapper[4886]: E0129 16:21:59.973219 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.174:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:22:00 crc kubenswrapper[4886]: W0129 16:22:00.051697 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.174:6443: connect: connection refused Jan 29 16:22:00 crc kubenswrapper[4886]: E0129 16:22:00.051824 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.174:6443: connect: connection refused" logger="UnhandledError" Jan 29 
16:22:00 crc kubenswrapper[4886]: I0129 16:22:00.230377 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:00 crc kubenswrapper[4886]: I0129 16:22:00.231721 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:00 crc kubenswrapper[4886]: I0129 16:22:00.231754 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:00 crc kubenswrapper[4886]: I0129 16:22:00.231763 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:00 crc kubenswrapper[4886]: I0129 16:22:00.231786 4886 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 16:22:00 crc kubenswrapper[4886]: E0129 16:22:00.232264 4886 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.174:6443: connect: connection refused" node="crc" Jan 29 16:22:00 crc kubenswrapper[4886]: I0129 16:22:00.421982 4886 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 16:22:00 crc kubenswrapper[4886]: E0129 16:22:00.423401 4886 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.174:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:22:00 crc kubenswrapper[4886]: I0129 16:22:00.546198 4886 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.174:6443: connect: connection refused Jan 29 16:22:00 crc kubenswrapper[4886]: I0129 16:22:00.561360 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 22:11:25.941051396 +0000 UTC Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.546576 4886 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.174:6443: connect: connection refused Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.561822 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 01:39:07.58613199 +0000 UTC Jan 29 16:22:01 crc kubenswrapper[4886]: E0129 16:22:01.565874 4886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.174:6443: connect: connection refused" interval="3.2s" Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.631181 4886 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="4602a8fe487e855ffe5ee1a385dab13c4a51c6708e80c6ce2dc8de22bf8dc14d" exitCode=0 Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.631264 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
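Note how the kubelet-serving certificate line reports a different rotation deadline on every retry (2025-12-10, then 2025-11-25, then 2026-01-07, ...) against the same 2026-02-24 expiry: the certificate manager re-picks a jittered deadline inside the certificate's validity window each time it runs. A sketch of that idea, assuming a 70%-90% band; the exact constants in the kubelet's certificate manager may differ:

```go
// jitter_deadline.go - sketch of deriving a jittered rotation deadline
// from a certificate's validity window, mirroring the per-attempt
// deadlines in the log. Constants are illustrative assumptions.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random instant between 70% and 90% of the
// certificate lifetime, counted from notBefore.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Expiry taken from the log; issuance date assumed one year earlier.
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC)
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	for i := 0; i < 3; i++ {
		// Each call lands on a different deadline, like the log lines above.
		fmt.Println(rotationDeadline(notBefore, notAfter))
	}
}
```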
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"4602a8fe487e855ffe5ee1a385dab13c4a51c6708e80c6ce2dc8de22bf8dc14d"} Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.631429 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.632781 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.632815 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.632828 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.634179 4886 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8ca6e179befd30088d295c36ea434f98e4293fb03c8eae8b204cccfbbce08b15" exitCode=0 Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.634233 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8ca6e179befd30088d295c36ea434f98e4293fb03c8eae8b204cccfbbce08b15"} Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.634271 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.635148 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.635214 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.635225 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.636703 4886 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692" exitCode=0 Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.636806 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.636834 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692"} Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.638428 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.638498 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.638522 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.639012 4886 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08" exitCode=0 Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.639079 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08"} Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.639224 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.640797 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.640858 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.640905 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.643297 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847"} Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.643393 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08"} Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.644525 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.645814 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.645879 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.645905 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:01 crc kubenswrapper[4886]: W0129 16:22:01.736854 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.174:6443: connect: connection refused Jan 29 16:22:01 crc kubenswrapper[4886]: E0129 16:22:01.736937 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.174:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.833207 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.834570 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.834609 
4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.834620 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:01 crc kubenswrapper[4886]: I0129 16:22:01.834645 4886 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 16:22:01 crc kubenswrapper[4886]: E0129 16:22:01.834970 4886 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.174:6443: connect: connection refused" node="crc" Jan 29 16:22:02 crc kubenswrapper[4886]: W0129 16:22:02.500733 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.174:6443: connect: connection refused Jan 29 16:22:02 crc kubenswrapper[4886]: E0129 16:22:02.500822 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.174:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:22:02 crc kubenswrapper[4886]: I0129 16:22:02.546487 4886 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.174:6443: connect: connection refused Jan 29 16:22:02 crc kubenswrapper[4886]: I0129 16:22:02.562728 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 02:33:22.23279698 +0000 UTC Jan 29 16:22:02 crc kubenswrapper[4886]: W0129 16:22:02.856990 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.174:6443: connect: connection refused Jan 29 16:22:02 crc kubenswrapper[4886]: E0129 16:22:02.857141 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.174:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:22:02 crc kubenswrapper[4886]: W0129 16:22:02.976782 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.174:6443: connect: connection refused Jan 29 16:22:02 crc kubenswrapper[4886]: E0129 16:22:02.976918 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.174:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:22:03 crc kubenswrapper[4886]: I0129 16:22:03.546144 4886 csi_plugin.go:884] Failed to contact API 
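The recurring reflector.go warnings are the kubelet's client-go informers (for Node, Service, CSIDriver, and RuntimeClass) failing their initial LIST against the same dead endpoint; each reflector retries independently with backoff, which is why the four resources reappear in rotation. A minimal sketch of the same list-then-watch machinery, assuming a reachable cluster and a hypothetical kubeconfig path:

```go
// watch_services.go - sketch of the client-go informer pattern whose
// failures appear as reflector.go warnings in the log. Assumes a valid
// kubeconfig at the (hypothetical) path below.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// The reflector behind this informer issues the same
	// "list *v1.Service ... resourceVersion=0" request seen in the log,
	// then upgrades to a WATCH; on failure it backs off and relists.
	factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
	informer := factory.Core().V1().Services().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	if !cache.WaitForCacheSync(stop, informer.HasSynced) {
		fmt.Println("cache never synced (API server unreachable?)")
		return
	}
	fmt.Println("service cache synced")
}
```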
Jan 29 16:22:03 crc kubenswrapper[4886]: I0129 16:22:03.563635 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 12:40:06.333174082 +0000 UTC
Jan 29 16:22:03 crc kubenswrapper[4886]: I0129 16:22:03.650034 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30"}
Jan 29 16:22:03 crc kubenswrapper[4886]: I0129 16:22:03.652630 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a09015f4cf412b00af42b12364de032e35bb3e11014cac2c07375cb3b2c24a44"}
Jan 29 16:22:03 crc kubenswrapper[4886]: I0129 16:22:03.656184 4886 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7033f095ef8ab16159973cb399d8e8a5a3e199e975168e9097142354bd73662c" exitCode=0
Jan 29 16:22:03 crc kubenswrapper[4886]: I0129 16:22:03.656260 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7033f095ef8ab16159973cb399d8e8a5a3e199e975168e9097142354bd73662c"}
Jan 29 16:22:03 crc kubenswrapper[4886]: I0129 16:22:03.656371 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:22:03 crc kubenswrapper[4886]: I0129 16:22:03.657399 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:03 crc kubenswrapper[4886]: I0129 16:22:03.657425 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:03 crc kubenswrapper[4886]: I0129 16:22:03.657435 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:03 crc kubenswrapper[4886]: I0129 16:22:03.659000 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"63c5243735574fb8f3b0de74ff95f08f9b3efdf7377f0f56e20b15ef6c859fe9"}
Jan 29 16:22:03 crc kubenswrapper[4886]: I0129 16:22:03.660385 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc"}
Jan 29 16:22:04 crc kubenswrapper[4886]: I0129 16:22:04.545416 4886 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.174:6443: connect: connection refused
Jan 29 16:22:04 crc kubenswrapper[4886]: I0129 16:22:04.564662 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 00:40:43.865457644 +0000 UTC
Jan 29 16:22:04 crc kubenswrapper[4886]: I0129 16:22:04.565828 4886 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 29 16:22:04 crc kubenswrapper[4886]: E0129 16:22:04.567143 4886 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.174:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:22:04 crc kubenswrapper[4886]: I0129 16:22:04.666520 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"841de8a754cdf15452fd36d55173c1017dec05d898f5a51109562c77cbbf76b0"}
Jan 29 16:22:04 crc kubenswrapper[4886]: I0129 16:22:04.669606 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab"}
Jan 29 16:22:04 crc kubenswrapper[4886]: I0129 16:22:04.669685 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:22:04 crc kubenswrapper[4886]: I0129 16:22:04.670959 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:04 crc kubenswrapper[4886]: I0129 16:22:04.671007 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:04 crc kubenswrapper[4886]: I0129 16:22:04.671025 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:04 crc kubenswrapper[4886]: E0129 16:22:04.767715 4886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.174:6443: connect: connection refused" interval="6.4s"
Jan 29 16:22:05 crc kubenswrapper[4886]: I0129 16:22:05.035537 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:22:05 crc kubenswrapper[4886]: I0129 16:22:05.037175 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:05 crc kubenswrapper[4886]: I0129 16:22:05.037223 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:05 crc kubenswrapper[4886]: I0129 16:22:05.037235 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:05 crc kubenswrapper[4886]: I0129 16:22:05.037265 4886 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 29 16:22:05 crc kubenswrapper[4886]: E0129 16:22:05.038021 4886 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.174:6443: connect: connection refused" node="crc"
Jan 29 16:22:05 crc kubenswrapper[4886]: I0129 16:22:05.546289 4886 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.174:6443: connect: connection refused
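By this point the lease controller's retry interval has walked 800ms, 1.6s, 3.2s, 6.4s: a doubling backoff. A sketch of that schedule; the cap used below is an assumption for illustration, not taken from the log:

```go
// lease_backoff.go - sketch of the doubling retry schedule visible in the
// "Failed to ensure lease exists, will retry" lines (800ms -> 6.4s).
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 800 * time.Millisecond
	const maxInterval = 7 * time.Second // assumed cap, for illustration only
	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d: retry in %s\n", attempt, interval)
		if next := interval * 2; next <= maxInterval {
			interval = next
		}
	}
}
```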
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.174:6443: connect: connection refused Jan 29 16:22:05 crc kubenswrapper[4886]: I0129 16:22:05.565631 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 04:27:38.224799953 +0000 UTC Jan 29 16:22:05 crc kubenswrapper[4886]: I0129 16:22:05.676850 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981"} Jan 29 16:22:05 crc kubenswrapper[4886]: I0129 16:22:05.679271 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a03b7f0f6736466493bbb77107b7e4fdc23172af590b052f250e1b1bcc118b95"} Jan 29 16:22:05 crc kubenswrapper[4886]: I0129 16:22:05.679378 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:05 crc kubenswrapper[4886]: I0129 16:22:05.680390 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:05 crc kubenswrapper[4886]: I0129 16:22:05.680440 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:05 crc kubenswrapper[4886]: I0129 16:22:05.680458 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:06 crc kubenswrapper[4886]: W0129 16:22:06.413030 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.174:6443: connect: connection refused Jan 29 16:22:06 crc kubenswrapper[4886]: E0129 16:22:06.413457 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.174:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:22:06 crc kubenswrapper[4886]: I0129 16:22:06.546513 4886 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.174:6443: connect: connection refused Jan 29 16:22:06 crc kubenswrapper[4886]: I0129 16:22:06.566149 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 06:24:30.292167033 +0000 UTC Jan 29 16:22:06 crc kubenswrapper[4886]: W0129 16:22:06.654237 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.174:6443: connect: connection refused Jan 29 16:22:06 crc kubenswrapper[4886]: E0129 16:22:06.654357 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.174:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:22:06 crc kubenswrapper[4886]: I0129 16:22:06.679973 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:22:06 crc kubenswrapper[4886]: I0129 16:22:06.687925 4886 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a03b7f0f6736466493bbb77107b7e4fdc23172af590b052f250e1b1bcc118b95" exitCode=0 Jan 29 16:22:06 crc kubenswrapper[4886]: I0129 16:22:06.688001 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a03b7f0f6736466493bbb77107b7e4fdc23172af590b052f250e1b1bcc118b95"} Jan 29 16:22:06 crc kubenswrapper[4886]: I0129 16:22:06.688066 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:06 crc kubenswrapper[4886]: I0129 16:22:06.689370 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:06 crc kubenswrapper[4886]: I0129 16:22:06.689419 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:06 crc kubenswrapper[4886]: I0129 16:22:06.689436 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:06 crc kubenswrapper[4886]: I0129 16:22:06.690852 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"92150b6456594fe8576872c07810d1984badff360fdeaa76b4db40179836b5ce"} Jan 29 16:22:06 crc kubenswrapper[4886]: I0129 16:22:06.690961 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:06 crc kubenswrapper[4886]: I0129 16:22:06.691983 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:06 crc kubenswrapper[4886]: I0129 16:22:06.692025 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:06 crc kubenswrapper[4886]: I0129 16:22:06.692038 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:07 crc kubenswrapper[4886]: I0129 16:22:07.546691 4886 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.174:6443: connect: connection refused Jan 29 16:22:07 crc kubenswrapper[4886]: I0129 16:22:07.567130 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 23:29:56.976948581 +0000 UTC Jan 29 16:22:07 crc kubenswrapper[4886]: W0129 16:22:07.612249 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.174:6443: connect: connection refused Jan 29 16:22:07 crc kubenswrapper[4886]: E0129 
16:22:07.612409 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.174:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:22:07 crc kubenswrapper[4886]: I0129 16:22:07.714143 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"51ae84766f6fd6012185ad521aaa074ccb307fb2074593a759cdfa88435aa0a5"} Jan 29 16:22:07 crc kubenswrapper[4886]: I0129 16:22:07.714230 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e2b23e01e4fc8b12affd97c32d95cf16d2f3875819542d3fcd126234873b7528"} Jan 29 16:22:07 crc kubenswrapper[4886]: I0129 16:22:07.718692 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:07 crc kubenswrapper[4886]: I0129 16:22:07.719233 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff"} Jan 29 16:22:07 crc kubenswrapper[4886]: I0129 16:22:07.719274 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749"} Jan 29 16:22:07 crc kubenswrapper[4886]: I0129 16:22:07.719642 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:07 crc kubenswrapper[4886]: I0129 16:22:07.719666 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:07 crc kubenswrapper[4886]: I0129 16:22:07.719678 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:08 crc kubenswrapper[4886]: I0129 16:22:08.084522 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:22:08 crc kubenswrapper[4886]: I0129 16:22:08.084707 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:08 crc kubenswrapper[4886]: I0129 16:22:08.084990 4886 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": dial tcp 192.168.126.11:10357: connect: connection refused" start-of-body= Jan 29 16:22:08 crc kubenswrapper[4886]: I0129 16:22:08.085063 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": dial tcp 192.168.126.11:10357: connect: connection refused" Jan 29 16:22:08 crc kubenswrapper[4886]: I0129 16:22:08.085876 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
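The cluster-policy-controller startup probe fails with "connection refused" on https://192.168.126.11:10357/healthz: the container is running but its listener is not up yet, which is exactly the situation a startup probe exists to cover. A sketch of an equivalent standalone check (an illustrative diagnostic, not the kubelet's prober; TLS verification is skipped because the endpoint's serving cert is not in the prober's trust store):

```go
// healthz_probe.go - sketch of an HTTPS healthz check equivalent to the
// startup probe in the log. InsecureSkipVerify is for illustration only.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.126.11:10357/healthz")
	if err != nil {
		// "connect: connection refused" here reproduces the probe failure.
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.Status)
}
```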
event="NodeHasSufficientMemory" Jan 29 16:22:08 crc kubenswrapper[4886]: I0129 16:22:08.085914 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:08 crc kubenswrapper[4886]: I0129 16:22:08.085929 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:08 crc kubenswrapper[4886]: W0129 16:22:08.091914 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.174:6443: connect: connection refused Jan 29 16:22:08 crc kubenswrapper[4886]: E0129 16:22:08.091990 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.174:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:22:08 crc kubenswrapper[4886]: I0129 16:22:08.546163 4886 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.174:6443: connect: connection refused Jan 29 16:22:08 crc kubenswrapper[4886]: I0129 16:22:08.567554 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 04:32:59.558922315 +0000 UTC Jan 29 16:22:08 crc kubenswrapper[4886]: I0129 16:22:08.726828 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e42e3dc10ed392584f907c3510987ac8b9029fcf7b99794215a9df89ec88316d"} Jan 29 16:22:08 crc kubenswrapper[4886]: I0129 16:22:08.727077 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bffa2a70b7b3e4fcc2affbab9ae07a6e36c7b16229ec989278fc4b23aa9c19af"} Jan 29 16:22:08 crc kubenswrapper[4886]: I0129 16:22:08.727168 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7d9711c336b256c8670f04cb7ca2e22ee85f6907ada1b25df3f1e53b84bb5b40"} Jan 29 16:22:08 crc kubenswrapper[4886]: I0129 16:22:08.726946 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:08 crc kubenswrapper[4886]: I0129 16:22:08.728613 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:08 crc kubenswrapper[4886]: I0129 16:22:08.728663 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:08 crc kubenswrapper[4886]: I0129 16:22:08.728674 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:08 crc kubenswrapper[4886]: E0129 16:22:08.732094 4886 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 29 16:22:08 crc kubenswrapper[4886]: I0129 16:22:08.733172 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6be22d482612669d23be9f69224292703dfd24c4a606c5856aa595794a280227"} Jan 29 16:22:08 crc kubenswrapper[4886]: I0129 16:22:08.733377 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:08 crc kubenswrapper[4886]: I0129 16:22:08.734530 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:08 crc kubenswrapper[4886]: I0129 16:22:08.734654 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:08 crc kubenswrapper[4886]: I0129 16:22:08.734771 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:08 crc kubenswrapper[4886]: I0129 16:22:08.850500 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:22:08 crc kubenswrapper[4886]: I0129 16:22:08.850677 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:08 crc kubenswrapper[4886]: I0129 16:22:08.852374 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:08 crc kubenswrapper[4886]: I0129 16:22:08.852404 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:08 crc kubenswrapper[4886]: I0129 16:22:08.852414 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:09 crc kubenswrapper[4886]: I0129 16:22:09.141782 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 29 16:22:09 crc kubenswrapper[4886]: I0129 16:22:09.546194 4886 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.174:6443: connect: connection refused Jan 29 16:22:09 crc kubenswrapper[4886]: I0129 16:22:09.568593 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 08:01:03.189832516 +0000 UTC Jan 29 16:22:09 crc kubenswrapper[4886]: I0129 16:22:09.735867 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:09 crc kubenswrapper[4886]: I0129 16:22:09.735937 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:09 crc kubenswrapper[4886]: I0129 16:22:09.736260 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:22:09 crc kubenswrapper[4886]: I0129 16:22:09.737401 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:09 crc kubenswrapper[4886]: I0129 16:22:09.737522 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:09 crc kubenswrapper[4886]: I0129 16:22:09.737551 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:09 crc kubenswrapper[4886]: I0129 
16:22:09.737782 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:09 crc kubenswrapper[4886]: I0129 16:22:09.737901 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:09 crc kubenswrapper[4886]: I0129 16:22:09.738065 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:09 crc kubenswrapper[4886]: E0129 16:22:09.943738 4886 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.174:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f4027dd44a59a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 16:21:58.540060058 +0000 UTC m=+1.448779330,LastTimestamp:2026-01-29 16:21:58.540060058 +0000 UTC m=+1.448779330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 16:22:10 crc kubenswrapper[4886]: I0129 16:22:10.473689 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:22:10 crc kubenswrapper[4886]: I0129 16:22:10.552255 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:22:10 crc kubenswrapper[4886]: I0129 16:22:10.569705 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 12:12:46.911039432 +0000 UTC Jan 29 16:22:10 crc kubenswrapper[4886]: I0129 16:22:10.741391 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 16:22:10 crc kubenswrapper[4886]: I0129 16:22:10.743410 4886 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6be22d482612669d23be9f69224292703dfd24c4a606c5856aa595794a280227" exitCode=255 Jan 29 16:22:10 crc kubenswrapper[4886]: I0129 16:22:10.743533 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:10 crc kubenswrapper[4886]: I0129 16:22:10.743662 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6be22d482612669d23be9f69224292703dfd24c4a606c5856aa595794a280227"} Jan 29 16:22:10 crc kubenswrapper[4886]: I0129 16:22:10.743696 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:10 crc kubenswrapper[4886]: I0129 16:22:10.744418 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:10 crc kubenswrapper[4886]: I0129 16:22:10.744466 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:10 crc kubenswrapper[4886]: I0129 16:22:10.744478 4886 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:10 crc kubenswrapper[4886]: I0129 16:22:10.745084 4886 scope.go:117] "RemoveContainer" containerID="6be22d482612669d23be9f69224292703dfd24c4a606c5856aa595794a280227" Jan 29 16:22:10 crc kubenswrapper[4886]: I0129 16:22:10.745270 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:10 crc kubenswrapper[4886]: I0129 16:22:10.745306 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:10 crc kubenswrapper[4886]: I0129 16:22:10.745318 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:11 crc kubenswrapper[4886]: I0129 16:22:11.438897 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:11 crc kubenswrapper[4886]: I0129 16:22:11.440925 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:11 crc kubenswrapper[4886]: I0129 16:22:11.441051 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:11 crc kubenswrapper[4886]: I0129 16:22:11.441068 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:11 crc kubenswrapper[4886]: I0129 16:22:11.441110 4886 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 16:22:11 crc kubenswrapper[4886]: I0129 16:22:11.570266 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 07:35:00.315800069 +0000 UTC Jan 29 16:22:11 crc kubenswrapper[4886]: I0129 16:22:11.748759 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 16:22:11 crc kubenswrapper[4886]: I0129 16:22:11.751846 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88"} Jan 29 16:22:11 crc kubenswrapper[4886]: I0129 16:22:11.752008 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:11 crc kubenswrapper[4886]: I0129 16:22:11.753048 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:11 crc kubenswrapper[4886]: I0129 16:22:11.753093 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:11 crc kubenswrapper[4886]: I0129 16:22:11.753102 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:12 crc kubenswrapper[4886]: I0129 16:22:12.571156 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 19:07:43.831192687 +0000 UTC Jan 29 16:22:12 crc kubenswrapper[4886]: I0129 16:22:12.755709 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:12 crc kubenswrapper[4886]: I0129 16:22:12.755801 4886 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:22:12 crc kubenswrapper[4886]: I0129 16:22:12.757239 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:12 crc kubenswrapper[4886]: I0129 16:22:12.757387 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:12 crc kubenswrapper[4886]: I0129 16:22:12.757424 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:13 crc kubenswrapper[4886]: I0129 16:22:13.077660 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:22:13 crc kubenswrapper[4886]: I0129 16:22:13.078124 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:13 crc kubenswrapper[4886]: I0129 16:22:13.080565 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:13 crc kubenswrapper[4886]: I0129 16:22:13.080672 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:13 crc kubenswrapper[4886]: I0129 16:22:13.080693 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:13 crc kubenswrapper[4886]: I0129 16:22:13.085101 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:22:13 crc kubenswrapper[4886]: I0129 16:22:13.157797 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 29 16:22:13 crc kubenswrapper[4886]: I0129 16:22:13.158071 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:13 crc kubenswrapper[4886]: I0129 16:22:13.159669 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:13 crc kubenswrapper[4886]: I0129 16:22:13.159721 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:13 crc kubenswrapper[4886]: I0129 16:22:13.159748 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:13 crc kubenswrapper[4886]: I0129 16:22:13.260518 4886 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 16:22:13 crc kubenswrapper[4886]: I0129 16:22:13.571606 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 23:40:52.60459419 +0000 UTC Jan 29 16:22:13 crc kubenswrapper[4886]: I0129 16:22:13.757930 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:13 crc kubenswrapper[4886]: I0129 16:22:13.757960 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:13 crc kubenswrapper[4886]: I0129 16:22:13.759223 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:13 crc kubenswrapper[4886]: I0129 16:22:13.759268 4886 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:13 crc kubenswrapper[4886]: I0129 16:22:13.759282 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:13 crc kubenswrapper[4886]: I0129 16:22:13.759494 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:13 crc kubenswrapper[4886]: I0129 16:22:13.759519 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:13 crc kubenswrapper[4886]: I0129 16:22:13.759531 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:13 crc kubenswrapper[4886]: I0129 16:22:13.763648 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:22:14 crc kubenswrapper[4886]: I0129 16:22:14.572158 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 19:37:07.217443169 +0000 UTC Jan 29 16:22:14 crc kubenswrapper[4886]: I0129 16:22:14.760068 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:14 crc kubenswrapper[4886]: I0129 16:22:14.761206 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:14 crc kubenswrapper[4886]: I0129 16:22:14.761303 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:14 crc kubenswrapper[4886]: I0129 16:22:14.761415 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:15 crc kubenswrapper[4886]: I0129 16:22:15.572676 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 15:16:48.977685952 +0000 UTC Jan 29 16:22:15 crc kubenswrapper[4886]: I0129 16:22:15.923462 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 16:22:15 crc kubenswrapper[4886]: I0129 16:22:15.923683 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:15 crc kubenswrapper[4886]: I0129 16:22:15.925066 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:15 crc kubenswrapper[4886]: I0129 16:22:15.925155 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:15 crc kubenswrapper[4886]: I0129 16:22:15.925179 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:16 crc kubenswrapper[4886]: I0129 16:22:16.573412 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 11:20:33.55426693 +0000 UTC Jan 29 16:22:17 crc kubenswrapper[4886]: I0129 16:22:17.459539 4886 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 29 16:22:17 crc kubenswrapper[4886]: I0129 16:22:17.459631 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 29 16:22:17 crc kubenswrapper[4886]: I0129 16:22:17.472363 4886 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 29 16:22:17 crc kubenswrapper[4886]: I0129 16:22:17.472459 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 29 16:22:17 crc kubenswrapper[4886]: I0129 16:22:17.574173 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 23:12:08.62062184 +0000 UTC Jan 29 16:22:18 crc kubenswrapper[4886]: I0129 16:22:18.574954 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 06:55:02.415905138 +0000 UTC Jan 29 16:22:18 crc kubenswrapper[4886]: E0129 16:22:18.732352 4886 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 29 16:22:19 crc kubenswrapper[4886]: I0129 16:22:19.221208 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 29 16:22:19 crc kubenswrapper[4886]: I0129 16:22:19.221595 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:19 crc kubenswrapper[4886]: I0129 16:22:19.223389 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:19 crc kubenswrapper[4886]: I0129 16:22:19.223442 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:19 crc kubenswrapper[4886]: I0129 16:22:19.223455 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:19 crc kubenswrapper[4886]: I0129 16:22:19.244724 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 29 16:22:19 crc kubenswrapper[4886]: I0129 16:22:19.575377 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 22:41:45.20322849 +0000 UTC Jan 29 16:22:19 crc kubenswrapper[4886]: I0129 16:22:19.774886 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:19 crc kubenswrapper[4886]: I0129 16:22:19.776386 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 16:22:19 crc kubenswrapper[4886]: I0129 16:22:19.776438 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:19 crc kubenswrapper[4886]: I0129 16:22:19.776454 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:20 crc kubenswrapper[4886]: I0129 16:22:20.482964 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:22:20 crc kubenswrapper[4886]: I0129 16:22:20.483228 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:20 crc kubenswrapper[4886]: I0129 16:22:20.484923 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:20 crc kubenswrapper[4886]: I0129 16:22:20.484992 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:20 crc kubenswrapper[4886]: I0129 16:22:20.485011 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:20 crc kubenswrapper[4886]: I0129 16:22:20.491660 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:22:20 crc kubenswrapper[4886]: I0129 16:22:20.576392 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 11:13:06.112487324 +0000 UTC Jan 29 16:22:20 crc kubenswrapper[4886]: I0129 16:22:20.777280 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:20 crc kubenswrapper[4886]: I0129 16:22:20.778257 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:20 crc kubenswrapper[4886]: I0129 16:22:20.778298 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:20 crc kubenswrapper[4886]: I0129 16:22:20.778306 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:21 crc kubenswrapper[4886]: I0129 16:22:21.086464 4886 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 16:22:21 crc kubenswrapper[4886]: I0129 16:22:21.086581 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 16:22:21 crc kubenswrapper[4886]: I0129 16:22:21.576927 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 04:55:04.052150702 +0000 UTC Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.452611 4886 
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.452611 4886 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.453436 4886 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.453940 4886 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 29 16:22:22 crc kubenswrapper[4886]: E0129 16:22:22.456438 4886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="7s" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.457898 4886 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.470965 4886 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 29 16:22:22 crc kubenswrapper[4886]: E0129 16:22:22.475608 4886 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.536072 4886 apiserver.go:52] "Watching apiserver" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.555997 4886 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.556380 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.556898 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.557004 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.557099 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:22:22 crc kubenswrapper[4886]: E0129 16:22:22.557118 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
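
The controller.go:145 error above is the kubelet failing to confirm its node Lease in the kube-node-lease namespace (the API call timed out), so it schedules a retry at interval="7s"; the kubelet_node_status.go:99 error right after shows node registration itself being refused until the apiserver's infra config cache syncs. Both paths simply retry until they succeed. A sketch of the same lease check done against the API with client-go (hypothetical standalone code; the kubeconfig path is an assumption):

    // leasecheck.go - poll for the node Lease the kubelet is trying to ensure.
    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig") // illustrative path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            // Mirror the 10s request timeout visible in the logged URL.
            ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
            _, err := cs.CoordinationV1().Leases("kube-node-lease").Get(ctx, "crc", metav1.GetOptions{})
            cancel()
            if err == nil {
                fmt.Println("lease present")
                return
            }
            fmt.Println("lease check failed, will retry:", err)
            time.Sleep(7 * time.Second) // matches interval="7s" in the log
        }
    }
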
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.557567 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.557692 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.557717 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:22:22 crc kubenswrapper[4886]: E0129 16:22:22.557756 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.559425 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.559654 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.560896 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.561285 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.561575 4886 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.561966 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.561987 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.562043 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.562254 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.562758 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.577416 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 02:35:26.361440267 +0000 UTC Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.620725 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.647120 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.655008 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.655077 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.655117 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.655153 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.655219 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.655250 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.655280 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.655311 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.655392 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.655428 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.655460 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.655492 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.655525 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.655556 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.655588 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.655619 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.655651 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.655692 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.655724 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.655759 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.655790 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.655777 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.655824 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.655817 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.655861 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.655942 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.655941 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.655978 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656029 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656087 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656132 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656174 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656211 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656242 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656279 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656308 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656389 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656422 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656533 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") 
pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656572 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656615 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656654 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656691 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656726 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656761 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656794 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656831 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656868 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656907 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656939 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656980 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657022 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657055 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657087 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657125 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657158 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657193 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657228 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657262 4886 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657295 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657352 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657385 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657423 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657458 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657487 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657512 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657537 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657565 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657598 4886 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657636 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657674 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657720 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657756 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657791 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657824 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657862 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657899 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657930 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 16:22:22 crc kubenswrapper[4886]: 
I0129 16:22:22.657960 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657998 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658035 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658068 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658099 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658131 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658163 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658200 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658234 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658271 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658306 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658374 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658400 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658427 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658452 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658478 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658503 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658530 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658555 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658578 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658604 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658628 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658653 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658682 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658717 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658755 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658787 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658819 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658852 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658891 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658924 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658959 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658990 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.659022 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.659057 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.659083 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.659114 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.659142 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.659200 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.659239 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.659274 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.659308 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.659557 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.659596 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.659634 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.659665 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.659699 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.659729 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.659759 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.659793 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.659820 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.659849 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.659877 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.659905 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.659940 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.659975 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660006 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660058 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660089 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660120 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660150 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660181 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660215 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660251 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660290 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660353 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660394 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660427 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660460 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660497 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660533 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660568 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660601 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660638 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660674 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660710 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660747 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660790 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660822 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660857 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660894 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660934 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660965 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660997 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661030 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661054 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661078 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661102 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661127 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661151 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661175 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661198 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661221 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661247 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661270 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661296 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661320 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661369 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661392 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661418 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661440 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661464 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661488 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661513 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661534 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661557 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661579 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661603 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661627 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661784 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661814 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661840 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661863 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661890 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661914 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661937 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661962 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661985 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.662007 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.662031 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.662063 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.662128 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.662164 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.662189 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.662219 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.662247 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.662272 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.662303 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.662350 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.662385 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.662408 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.662437 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.662466 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.662490 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.662516 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.662595 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.662612 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.662629 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.662641 4886 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656116 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656192 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656214 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656417 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656429 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656437 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656470 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656537 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.656593 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.670196 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657011 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657213 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657232 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657253 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657269 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657383 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657460 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657461 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657537 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657633 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657745 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657770 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657932 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657964 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.657990 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658071 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658080 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658187 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658229 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658270 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.658816 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.659535 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.659716 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660277 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660642 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.660850 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661027 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661189 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661388 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661577 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661739 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.661927 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.662216 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.662510 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.663640 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: E0129 16:22:22.663825 4886 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.663879 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.663865 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.663870 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.664243 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.664550 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.664862 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.665028 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.665162 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.665385 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.665456 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.665606 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.665684 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.665792 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.665807 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.666016 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.666083 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.666249 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.666286 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.666541 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.666660 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.666846 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.667019 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.667082 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.667224 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.667428 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.667550 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.667835 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.667856 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.668248 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.668658 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.668698 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.668717 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.669010 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.669216 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.669368 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.669396 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.669419 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.669508 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.669666 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.669690 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.669736 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.669890 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.669902 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.670886 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.670914 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.671094 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: E0129 16:22:22.671140 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:22:23.171055555 +0000 UTC m=+26.079774867 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.671283 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.671350 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.671508 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.671520 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.669954 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.670296 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.670622 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.670537 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.670737 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: E0129 16:22:22.671645 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 16:22:23.171610901 +0000 UTC m=+26.080330243 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.671808 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.671838 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.671916 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.672188 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.672295 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.672313 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.672628 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.672727 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.672731 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.672850 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.673001 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.673253 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.673286 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.673568 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.673695 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.673763 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.673932 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.673983 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: E0129 16:22:22.674089 4886 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.674271 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.674466 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: E0129 16:22:22.674705 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:22:23.174690647 +0000 UTC m=+26.083410029 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.675063 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.675231 4886 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.675698 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.675896 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.669941 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.676184 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.676241 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.676232 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.676316 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.676260 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.676067 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.676490 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.676510 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.676986 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.677288 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.677304 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.677359 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.677394 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.677467 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.677620 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.677732 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.677795 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.677857 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.678073 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.678566 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.678812 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.679785 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.680491 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.680971 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.681984 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.682020 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.682106 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.682132 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.682218 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.681168 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.681179 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.681189 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.682500 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.674573 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.682769 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.683143 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.683299 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.683353 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.684006 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.684317 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.685149 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.685350 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.685494 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.685543 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.685539 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.686469 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.686559 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.686604 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.686687 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.687281 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.687683 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.691483 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.692538 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.692873 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.693059 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.693203 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.693491 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.693763 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.694017 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.694646 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.695804 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.698402 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.698825 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.699071 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.699259 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.699353 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.699396 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.699747 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: E0129 16:22:22.699900 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:22:22 crc kubenswrapper[4886]: E0129 16:22:22.699927 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:22:22 crc kubenswrapper[4886]: E0129 16:22:22.699943 4886 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:22:22 crc kubenswrapper[4886]: E0129 16:22:22.700018 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:22:23.199994339 +0000 UTC m=+26.108713611 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:22:22 crc kubenswrapper[4886]: E0129 16:22:22.700077 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:22:22 crc kubenswrapper[4886]: E0129 16:22:22.700146 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:22:22 crc kubenswrapper[4886]: E0129 16:22:22.700227 4886 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:22:22 crc kubenswrapper[4886]: E0129 16:22:22.700410 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:22:23.200398111 +0000 UTC m=+26.109117383 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.700099 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.702170 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.702298 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.705548 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.710991 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.719722 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.720424 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.723555 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.726482 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.735587 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.739214 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763000 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763053 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763177 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763190 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763200 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763210 4886 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763219 4886 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763230 4886 reconciler_common.go:293] 
"Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763239 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763248 4886 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763257 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763266 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763274 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763285 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763294 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763303 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763302 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763314 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763389 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763406 4886 
reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763428 4886 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763438 4886 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763447 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763456 4886 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763464 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763473 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763483 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763491 4886 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763501 4886 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763510 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763519 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763530 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763539 4886 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763548 4886 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763557 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763566 4886 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763576 4886 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763584 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763593 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763602 4886 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763610 4886 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763619 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763628 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763638 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763647 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763655 4886 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763663 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763672 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763680 4886 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763689 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763700 4886 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763710 4886 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763720 4886 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763729 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763738 4886 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763748 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763759 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763770 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763782 4886 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763793 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763805 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763815 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763825 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763836 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763845 4886 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763854 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763864 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763874 4886 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763886 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763895 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763904 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 
crc kubenswrapper[4886]: I0129 16:22:22.763912 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763921 4886 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763929 4886 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763937 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763946 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763954 4886 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763962 4886 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763970 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763979 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763989 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.763999 4886 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764009 4886 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764022 4886 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 
16:22:22.764033 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764042 4886 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764052 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764062 4886 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764072 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764081 4886 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764089 4886 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764097 4886 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764106 4886 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764114 4886 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764122 4886 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764130 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764139 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764148 4886 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764156 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764164 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764173 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764182 4886 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764190 4886 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764199 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764210 4886 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764217 4886 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764226 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764235 4886 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764244 4886 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764251 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764262 4886 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764270 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764278 4886 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764286 4886 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764295 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764303 4886 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764311 4886 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764319 4886 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764343 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764351 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764359 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764367 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764375 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764385 4886 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764393 4886 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764401 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764409 4886 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764417 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764426 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764434 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764443 4886 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764452 4886 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764461 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764471 4886 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764485 4886 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764494 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764504 4886 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764512 4886 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764540 4886 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764586 4886 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764597 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764605 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764613 4886 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764621 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764630 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764638 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764648 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764658 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764846 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764869 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764880 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764890 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764901 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764911 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764923 4886 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764934 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764945 4886 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764956 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764966 4886 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764974 4886 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764983 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.764993 4886 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765002 4886 reconciler_common.go:293] "Volume detached for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765047 4886 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765057 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765066 4886 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765077 4886 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765105 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765115 4886 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765124 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765132 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765142 4886 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765151 4886 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765161 4886 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765170 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765180 4886 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765189 4886 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765198 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765206 4886 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765216 4886 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765225 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765235 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765244 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765254 4886 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765263 4886 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765271 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765279 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765287 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765296 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765304 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765312 4886 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765682 4886 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765693 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.765701 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.788587 4886 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.875252 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.887482 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:22:22 crc kubenswrapper[4886]: I0129 16:22:22.895997 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:22:22 crc kubenswrapper[4886]: W0129 16:22:22.896474 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-ca0f92e56632f41d14791b4a0417f50bb7a20179295c27e54295c8dd3923b101 WatchSource:0}: Error finding container ca0f92e56632f41d14791b4a0417f50bb7a20179295c27e54295c8dd3923b101: Status 404 returned error can't find the container with id ca0f92e56632f41d14791b4a0417f50bb7a20179295c27e54295c8dd3923b101 Jan 29 16:22:22 crc kubenswrapper[4886]: W0129 16:22:22.901043 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-6b6eb21b56074a10884dda129c11731cf801ca722e4f96ea0fe5fd05dd4ceb33 WatchSource:0}: Error finding container 6b6eb21b56074a10884dda129c11731cf801ca722e4f96ea0fe5fd05dd4ceb33: Status 404 returned error can't find the container with id 6b6eb21b56074a10884dda129c11731cf801ca722e4f96ea0fe5fd05dd4ceb33 Jan 29 16:22:22 crc kubenswrapper[4886]: W0129 16:22:22.913192 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-f6b642ea78f1e149c010bdd48524a2e9599b9113abe0cd270e749763a1552d61 WatchSource:0}: Error finding container f6b642ea78f1e149c010bdd48524a2e9599b9113abe0cd270e749763a1552d61: Status 404 returned error can't find the container with id f6b642ea78f1e149c010bdd48524a2e9599b9113abe0cd270e749763a1552d61 Jan 29 16:22:23 crc kubenswrapper[4886]: I0129 16:22:23.269521 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:22:23 crc kubenswrapper[4886]: I0129 16:22:23.269581 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:22:23 crc kubenswrapper[4886]: I0129 16:22:23.269610 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:22:23 crc kubenswrapper[4886]: I0129 16:22:23.269633 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:22:23 crc kubenswrapper[4886]: I0129 16:22:23.269653 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:22:23 crc kubenswrapper[4886]: E0129 16:22:23.269735 4886 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:22:23 crc kubenswrapper[4886]: E0129 16:22:23.269774 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:22:23 crc kubenswrapper[4886]: E0129 16:22:23.269792 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:22:23 crc kubenswrapper[4886]: E0129 16:22:23.269805 4886 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:22:23 crc kubenswrapper[4886]: E0129 16:22:23.269809 4886 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:22:23 crc kubenswrapper[4886]: E0129 16:22:23.269812 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:22:24.269789809 +0000 UTC m=+27.178509081 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:22:23 crc kubenswrapper[4886]: E0129 16:22:23.269867 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:22:24.269854751 +0000 UTC m=+27.178574023 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:22:23 crc kubenswrapper[4886]: E0129 16:22:23.269883 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:22:24.269877351 +0000 UTC m=+27.178596623 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:22:23 crc kubenswrapper[4886]: E0129 16:22:23.269896 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:22:23 crc kubenswrapper[4886]: E0129 16:22:23.269942 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:22:23 crc kubenswrapper[4886]: E0129 16:22:23.269956 4886 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:22:23 crc kubenswrapper[4886]: E0129 16:22:23.270033 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:22:24.270010565 +0000 UTC m=+27.178729837 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:22:23 crc kubenswrapper[4886]: E0129 16:22:23.270259 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:22:24.270244562 +0000 UTC m=+27.178963934 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:22:23 crc kubenswrapper[4886]: I0129 16:22:23.577891 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 07:02:29.446259011 +0000 UTC Jan 29 16:22:23 crc kubenswrapper[4886]: I0129 16:22:23.790505 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6b6eb21b56074a10884dda129c11731cf801ca722e4f96ea0fe5fd05dd4ceb33"} Jan 29 16:22:23 crc kubenswrapper[4886]: I0129 16:22:23.791711 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ca0f92e56632f41d14791b4a0417f50bb7a20179295c27e54295c8dd3923b101"} Jan 29 16:22:23 crc kubenswrapper[4886]: I0129 16:22:23.793137 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"f6b642ea78f1e149c010bdd48524a2e9599b9113abe0cd270e749763a1552d61"} Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.084096 4886 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.084191 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.278170 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.278251 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.278284 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.278310 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:22:24 crc kubenswrapper[4886]: E0129 16:22:24.278363 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:22:26.278317431 +0000 UTC m=+29.187036713 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.278400 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:22:24 crc kubenswrapper[4886]: E0129 16:22:24.278467 4886 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:22:24 crc kubenswrapper[4886]: E0129 16:22:24.278528 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:22:24 crc kubenswrapper[4886]: E0129 16:22:24.278570 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:22:24 crc kubenswrapper[4886]: E0129 16:22:24.278587 4886 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:22:24 crc kubenswrapper[4886]: E0129 16:22:24.278528 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:22:24 crc kubenswrapper[4886]: E0129 16:22:24.278660 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:22:24 crc kubenswrapper[4886]: E0129 16:22:24.278669 4886 projected.go:194] 
Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:22:24 crc kubenswrapper[4886]: E0129 16:22:24.278547 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:22:26.278536067 +0000 UTC m=+29.187255349 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:22:24 crc kubenswrapper[4886]: E0129 16:22:24.278470 4886 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:22:24 crc kubenswrapper[4886]: E0129 16:22:24.278721 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:22:26.278697012 +0000 UTC m=+29.187416344 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:22:24 crc kubenswrapper[4886]: E0129 16:22:24.278745 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:22:26.278736423 +0000 UTC m=+29.187455785 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:22:24 crc kubenswrapper[4886]: E0129 16:22:24.278764 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:22:26.278753033 +0000 UTC m=+29.187472395 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.578833 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 03:46:08.609365938 +0000 UTC Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.614684 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.614768 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.614921 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:22:24 crc kubenswrapper[4886]: E0129 16:22:24.614993 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:22:24 crc kubenswrapper[4886]: E0129 16:22:24.615280 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:22:24 crc kubenswrapper[4886]: E0129 16:22:24.615411 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.619968 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.620638 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.621426 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.622138 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.622839 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.623557 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.624506 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.625447 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.626472 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.627148 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.627879 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.628780 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.630847 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.632186 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.633133 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.634667 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.635508 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.636628 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.637392 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.638195 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.639393 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.640174 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.641373 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.642512 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.643214 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.644162 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.645122 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.645855 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.646817 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.647544 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.648226 4886 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.648458 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.652640 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.653573 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.654169 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.656055 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.657276 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.658059 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.659362 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.660176 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.661105 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.661880 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.663065 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.664212 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.664797 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.665386 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.666269 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.667371 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.667884 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.668379 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.669195 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.669769 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.670714 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.671210 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.797775 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.798751 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.800414 4886 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88" exitCode=255 Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.800482 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88"} Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.800545 4886 scope.go:117] "RemoveContainer" containerID="6be22d482612669d23be9f69224292703dfd24c4a606c5856aa595794a280227" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.802276 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67"} Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.803712 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660"} Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.816634 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.828675 4886 scope.go:117] "RemoveContainer" containerID="8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88" Jan 29 16:22:24 crc kubenswrapper[4886]: E0129 16:22:24.828967 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.831051 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.841745 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.852935 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.863058 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.870943 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.879765 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.889892 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6be22d482612669d23be9f69224292703dfd24c4a606c5856aa595794a280227\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:09Z\\\",\\\"message\\\":\\\"W0129 16:22:08.823239 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
16:22:08.824152 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769703728 cert, and key in /tmp/serving-cert-2358660331/serving-signer.crt, /tmp/serving-cert-2358660331/serving-signer.key\\\\nI0129 16:22:09.456790 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:22:09.460676 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:22:09.460903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:09.462059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2358660331/tls.crt::/tmp/serving-cert-2358660331/tls.key\\\\\\\"\\\\nF0129 16:22:09.680786 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.900970 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.916009 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.928793 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.942697 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.954342 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:24 crc kubenswrapper[4886]: I0129 16:22:24.965293 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:25 crc kubenswrapper[4886]: I0129 16:22:25.578993 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 17:59:40.359209742 +0000 UTC Jan 29 16:22:25 crc kubenswrapper[4886]: I0129 16:22:25.809475 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 16:22:25 crc kubenswrapper[4886]: I0129 16:22:25.813124 4886 scope.go:117] "RemoveContainer" containerID="8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88" Jan 29 16:22:25 crc kubenswrapper[4886]: E0129 16:22:25.813480 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 29 16:22:25 crc kubenswrapper[4886]: I0129 16:22:25.815883 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9"} Jan 29 16:22:25 crc kubenswrapper[4886]: I0129 16:22:25.824468 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:25 crc kubenswrapper[4886]: I0129 16:22:25.834504 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:25 crc kubenswrapper[4886]: I0129 16:22:25.845778 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:25 crc kubenswrapper[4886]: I0129 16:22:25.856556 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:25 crc kubenswrapper[4886]: I0129 16:22:25.867128 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:25 crc kubenswrapper[4886]: I0129 16:22:25.880738 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:25 crc kubenswrapper[4886]: I0129 16:22:25.893546 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:25 crc kubenswrapper[4886]: I0129 16:22:25.903438 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:25 crc kubenswrapper[4886]: I0129 16:22:25.913236 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:25 crc kubenswrapper[4886]: I0129 16:22:25.921997 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:25 crc kubenswrapper[4886]: I0129 16:22:25.931862 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:26 crc kubenswrapper[4886]: I0129 16:22:26.004383 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:26 crc kubenswrapper[4886]: I0129 16:22:26.019219 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:26 crc kubenswrapper[4886]: I0129 16:22:26.029363 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:22:26 crc kubenswrapper[4886]: I0129 16:22:26.295124 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:22:26 crc kubenswrapper[4886]: I0129 16:22:26.295235 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:22:26 crc kubenswrapper[4886]: E0129 16:22:26.295372 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:22:30.295290092 +0000 UTC m=+33.204009394 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:22:26 crc kubenswrapper[4886]: E0129 16:22:26.295447 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:22:26 crc kubenswrapper[4886]: E0129 16:22:26.295481 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:22:26 crc kubenswrapper[4886]: I0129 16:22:26.295488 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:22:26 crc kubenswrapper[4886]: I0129 16:22:26.295550 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:22:26 crc kubenswrapper[4886]: E0129 16:22:26.295502 4886 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:22:26 crc kubenswrapper[4886]: I0129 16:22:26.295601 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:22:26 crc kubenswrapper[4886]: E0129 16:22:26.295661 4886 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:22:26 crc kubenswrapper[4886]: E0129 16:22:26.295768 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:22:30.295707143 +0000 UTC m=+33.204426495 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:22:26 crc kubenswrapper[4886]: E0129 16:22:26.295697 4886 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:22:26 crc kubenswrapper[4886]: E0129 16:22:26.295900 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:22:30.295846797 +0000 UTC m=+33.204566109 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:22:26 crc kubenswrapper[4886]: E0129 16:22:26.295583 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:22:26 crc kubenswrapper[4886]: E0129 16:22:26.295990 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:22:30.29595209 +0000 UTC m=+33.204671422 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:22:26 crc kubenswrapper[4886]: E0129 16:22:26.295993 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:22:26 crc kubenswrapper[4886]: E0129 16:22:26.296031 4886 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:22:26 crc kubenswrapper[4886]: E0129 16:22:26.296106 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:22:30.296086344 +0000 UTC m=+33.204805656 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:22:26 crc kubenswrapper[4886]: I0129 16:22:26.579495 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 09:25:05.29889457 +0000 UTC Jan 29 16:22:26 crc kubenswrapper[4886]: I0129 16:22:26.614727 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:22:26 crc kubenswrapper[4886]: I0129 16:22:26.614762 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:22:26 crc kubenswrapper[4886]: E0129 16:22:26.615090 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:22:26 crc kubenswrapper[4886]: I0129 16:22:26.614921 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:22:26 crc kubenswrapper[4886]: E0129 16:22:26.615288 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:22:26 crc kubenswrapper[4886]: E0129 16:22:26.615363 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:22:26 crc kubenswrapper[4886]: I0129 16:22:26.821508 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4"} Jan 29 16:22:26 crc kubenswrapper[4886]: I0129 16:22:26.838499 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:26Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:26 crc kubenswrapper[4886]: I0129 16:22:26.856250 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:26Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:26 crc kubenswrapper[4886]: I0129 16:22:26.872345 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:26Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:26 crc kubenswrapper[4886]: I0129 16:22:26.888704 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:26Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:26 crc kubenswrapper[4886]: I0129 16:22:26.900105 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:26Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:26 crc kubenswrapper[4886]: I0129 16:22:26.910869 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:26Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:26 crc kubenswrapper[4886]: I0129 16:22:26.921725 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:26Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:27 crc kubenswrapper[4886]: I0129 16:22:27.581216 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 22:59:17.18429627 +0000 UTC Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.089438 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.097144 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.099883 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.104174 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.118488 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.131971 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.149042 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.168552 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.183519 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.197414 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.216250 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.228618 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.243341 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.256868 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.272192 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.284439 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.296544 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.310232 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.517721 4886 csr.go:261] certificate signing request csr-4kkjm is approved, waiting to be issued Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.580613 4886 csr.go:257] certificate signing request csr-4kkjm is issued Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.581461 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 08:25:15.198332666 +0000 UTC Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.614062 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:22:28 crc kubenswrapper[4886]: E0129 16:22:28.614253 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.614277 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:22:28 crc kubenswrapper[4886]: E0129 16:22:28.614439 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.614523 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:22:28 crc kubenswrapper[4886]: E0129 16:22:28.614584 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.629509 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.651390 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.664527 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480
fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.679706 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.690280 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.701220 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.714435 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:28 crc kubenswrapper[4886]: I0129 16:22:28.741254 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:28 crc kubenswrapper[4886]: E0129 16:22:28.839177 4886 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.313434 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-4dstj"] Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.314105 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.314188 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-dtrvj"] Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.314939 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-dtrvj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.315460 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-gx4vp"] Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.315782 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-f85c7"] Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.316096 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.316269 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-f85c7" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.316627 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.316682 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.316957 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.317173 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.317251 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.320089 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.321298 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.321456 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.321912 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.322010 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.322080 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.322139 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.322257 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.322501 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.322540 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-host-run-k8s-cni-cncf-io\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.322565 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.322682 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqnqc\" (UniqueName: \"kubernetes.io/projected/ae17b497-19c0-4f59-93e1-279069e2710a-kube-api-access-jqnqc\") pod \"multus-additional-cni-plugins-f85c7\" (UID: \"ae17b497-19c0-4f59-93e1-279069e2710a\") " pod="openshift-multus/multus-additional-cni-plugins-f85c7" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.322720 
4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-host-var-lib-kubelet\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.322777 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-os-release\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.322832 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8bb307e5-0827-4602-95ff-18dec456002b-hosts-file\") pod \"node-resolver-dtrvj\" (UID: \"8bb307e5-0827-4602-95ff-18dec456002b\") " pod="openshift-dns/node-resolver-dtrvj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.322861 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr6xf\" (UniqueName: \"kubernetes.io/projected/8bb307e5-0827-4602-95ff-18dec456002b-kube-api-access-xr6xf\") pod \"node-resolver-dtrvj\" (UID: \"8bb307e5-0827-4602-95ff-18dec456002b\") " pod="openshift-dns/node-resolver-dtrvj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.322901 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-multus-socket-dir-parent\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.322919 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-host-run-multus-certs\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.322935 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-etc-kubernetes\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.322978 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-host-var-lib-cni-bin\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.323052 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b415d17e-f329-40e7-8a3f-32881cb5347a-cni-binary-copy\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.323096 4886 
Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.323096 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ae17b497-19c0-4f59-93e1-279069e2710a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f85c7\" (UID: \"ae17b497-19c0-4f59-93e1-279069e2710a\") " pod="openshift-multus/multus-additional-cni-plugins-f85c7"
Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.323131 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a5d8fc0-7aa5-431a-9add-9bdcc6d20091-proxy-tls\") pod \"machine-config-daemon-gx4vp\" (UID: \"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\") " pod="openshift-machine-config-operator/machine-config-daemon-gx4vp"
Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.323168 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a5d8fc0-7aa5-431a-9add-9bdcc6d20091-mcd-auth-proxy-config\") pod \"machine-config-daemon-gx4vp\" (UID: \"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\") " pod="openshift-machine-config-operator/machine-config-daemon-gx4vp"
Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.323190 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-system-cni-dir\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj"
Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.323216 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-multus-cni-dir\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj"
Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.323239 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxtfg\" (UniqueName: \"kubernetes.io/projected/b415d17e-f329-40e7-8a3f-32881cb5347a-kube-api-access-xxtfg\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj"
Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.323262 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ae17b497-19c0-4f59-93e1-279069e2710a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f85c7\" (UID: \"ae17b497-19c0-4f59-93e1-279069e2710a\") " pod="openshift-multus/multus-additional-cni-plugins-f85c7"
Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.323287 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ae17b497-19c0-4f59-93e1-279069e2710a-system-cni-dir\") pod \"multus-additional-cni-plugins-f85c7\" (UID: \"ae17b497-19c0-4f59-93e1-279069e2710a\") " pod="openshift-multus/multus-additional-cni-plugins-f85c7"
Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.323314 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ae17b497-19c0-4f59-93e1-279069e2710a-os-release\") pod
\"multus-additional-cni-plugins-f85c7\" (UID: \"ae17b497-19c0-4f59-93e1-279069e2710a\") " pod="openshift-multus/multus-additional-cni-plugins-f85c7" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.323393 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h44ws\" (UniqueName: \"kubernetes.io/projected/5a5d8fc0-7aa5-431a-9add-9bdcc6d20091-kube-api-access-h44ws\") pod \"machine-config-daemon-gx4vp\" (UID: \"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\") " pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.323420 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-host-run-netns\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.323460 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-host-var-lib-cni-multus\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.323484 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b415d17e-f329-40e7-8a3f-32881cb5347a-multus-daemon-config\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.323515 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5a5d8fc0-7aa5-431a-9add-9bdcc6d20091-rootfs\") pod \"machine-config-daemon-gx4vp\" (UID: \"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\") " pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.323538 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-cnibin\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.323577 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ae17b497-19c0-4f59-93e1-279069e2710a-cni-binary-copy\") pod \"multus-additional-cni-plugins-f85c7\" (UID: \"ae17b497-19c0-4f59-93e1-279069e2710a\") " pod="openshift-multus/multus-additional-cni-plugins-f85c7" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.323622 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-hostroot\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.323645 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-multus-conf-dir\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.323671 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ae17b497-19c0-4f59-93e1-279069e2710a-cnibin\") pod \"multus-additional-cni-plugins-f85c7\" (UID: \"ae17b497-19c0-4f59-93e1-279069e2710a\") " pod="openshift-multus/multus-additional-cni-plugins-f85c7" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.327990 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bsnwn"] Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.328845 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.330439 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.330613 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.331019 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.331355 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.331515 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.331587 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.332055 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.335619 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.350795 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.372260 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.386151 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:29Z is after 2025-08-24T17:21:41Z"
Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.424931 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-host-var-lib-kubelet\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj"
Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.424986 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-run-ovn\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn"
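
Note: every status_manager.go:875 "Failed to update status for pod" entry in this window has the same root cause, spelled out at the end of each error: the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743 serves a certificate whose NotAfter (2025-08-24T17:21:41Z) is months before the node's current clock (2026-01-29), so the API server rejects every status patch. A small diagnostic sketch (host and port taken from the error text above; an illustrative tool, not an official one) that dials the endpoint and prints the served certificate's validity window:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"log"
    	"time"
    )

    func main() {
    	// Skip verification deliberately: we want to inspect the cert even
    	// though it is expired, which is exactly why verification fails.
    	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
    		InsecureSkipVerify: true,
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	cert := conn.ConnectionState().PeerCertificates[0]
    	fmt.Println("subject:  ", cert.Subject)
    	fmt.Println("notBefore:", cert.NotBefore.Format(time.RFC3339))
    	fmt.Println("notAfter: ", cert.NotAfter.Format(time.RFC3339))
    	if time.Now().After(cert.NotAfter) {
    		fmt.Println("=> certificate is expired, matching the kubelet's x509 error")
    	}
    }
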
Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425013 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d46238ab-90d4-41b8-b546-6dbff06cf5ed-ovn-node-metrics-cert\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn"
Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425047 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-var-lib-openvswitch\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn"
Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425067 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-log-socket\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn"
Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425091 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-os-release\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj"
Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425118 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-run-netns\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn"
Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425141 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-node-log\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn"
Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425174 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8bb307e5-0827-4602-95ff-18dec456002b-hosts-file\") pod \"node-resolver-dtrvj\" (UID: \"8bb307e5-0827-4602-95ff-18dec456002b\") " pod="openshift-dns/node-resolver-dtrvj"
Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425196 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-run-systemd\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn"
Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425219 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-cni-bin\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn"
Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425243
4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425274 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr6xf\" (UniqueName: \"kubernetes.io/projected/8bb307e5-0827-4602-95ff-18dec456002b-kube-api-access-xr6xf\") pod \"node-resolver-dtrvj\" (UID: \"8bb307e5-0827-4602-95ff-18dec456002b\") " pod="openshift-dns/node-resolver-dtrvj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425298 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-multus-socket-dir-parent\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425318 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-host-run-multus-certs\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425357 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-etc-kubernetes\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425381 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-systemd-units\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425404 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-host-var-lib-cni-bin\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425426 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-kubelet\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425453 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d46238ab-90d4-41b8-b546-6dbff06cf5ed-ovnkube-config\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425473 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b415d17e-f329-40e7-8a3f-32881cb5347a-cni-binary-copy\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425498 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ae17b497-19c0-4f59-93e1-279069e2710a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f85c7\" (UID: \"ae17b497-19c0-4f59-93e1-279069e2710a\") " pod="openshift-multus/multus-additional-cni-plugins-f85c7" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425520 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-run-ovn-kubernetes\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425543 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a5d8fc0-7aa5-431a-9add-9bdcc6d20091-proxy-tls\") pod \"machine-config-daemon-gx4vp\" (UID: \"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\") " pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425564 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a5d8fc0-7aa5-431a-9add-9bdcc6d20091-mcd-auth-proxy-config\") pod \"machine-config-daemon-gx4vp\" (UID: \"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\") " pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425586 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-system-cni-dir\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425609 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-multus-cni-dir\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425636 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxtfg\" (UniqueName: \"kubernetes.io/projected/b415d17e-f329-40e7-8a3f-32881cb5347a-kube-api-access-xxtfg\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425661 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ae17b497-19c0-4f59-93e1-279069e2710a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f85c7\" (UID: \"ae17b497-19c0-4f59-93e1-279069e2710a\") " pod="openshift-multus/multus-additional-cni-plugins-f85c7" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425686 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ae17b497-19c0-4f59-93e1-279069e2710a-os-release\") pod \"multus-additional-cni-plugins-f85c7\" (UID: \"ae17b497-19c0-4f59-93e1-279069e2710a\") " pod="openshift-multus/multus-additional-cni-plugins-f85c7" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425713 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-etc-openvswitch\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425754 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-run-openvswitch\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425778 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ae17b497-19c0-4f59-93e1-279069e2710a-system-cni-dir\") pod \"multus-additional-cni-plugins-f85c7\" (UID: \"ae17b497-19c0-4f59-93e1-279069e2710a\") " pod="openshift-multus/multus-additional-cni-plugins-f85c7" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425812 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h44ws\" (UniqueName: \"kubernetes.io/projected/5a5d8fc0-7aa5-431a-9add-9bdcc6d20091-kube-api-access-h44ws\") pod \"machine-config-daemon-gx4vp\" (UID: \"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\") " pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425831 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-host-run-netns\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425853 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-host-var-lib-cni-multus\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425874 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b415d17e-f329-40e7-8a3f-32881cb5347a-multus-daemon-config\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425894 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d46238ab-90d4-41b8-b546-6dbff06cf5ed-env-overrides\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 
16:22:29.425916 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5a5d8fc0-7aa5-431a-9add-9bdcc6d20091-rootfs\") pod \"machine-config-daemon-gx4vp\" (UID: \"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\") " pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425936 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-cnibin\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425967 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ae17b497-19c0-4f59-93e1-279069e2710a-cni-binary-copy\") pod \"multus-additional-cni-plugins-f85c7\" (UID: \"ae17b497-19c0-4f59-93e1-279069e2710a\") " pod="openshift-multus/multus-additional-cni-plugins-f85c7" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.425989 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-slash\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.426010 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-hostroot\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.426031 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-multus-conf-dir\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.426052 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d46238ab-90d4-41b8-b546-6dbff06cf5ed-ovnkube-script-lib\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.426075 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ae17b497-19c0-4f59-93e1-279069e2710a-cnibin\") pod \"multus-additional-cni-plugins-f85c7\" (UID: \"ae17b497-19c0-4f59-93e1-279069e2710a\") " pod="openshift-multus/multus-additional-cni-plugins-f85c7" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.426100 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-cni-netd\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.426122 4886 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-jqnqc\" (UniqueName: \"kubernetes.io/projected/ae17b497-19c0-4f59-93e1-279069e2710a-kube-api-access-jqnqc\") pod \"multus-additional-cni-plugins-f85c7\" (UID: \"ae17b497-19c0-4f59-93e1-279069e2710a\") " pod="openshift-multus/multus-additional-cni-plugins-f85c7" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.426147 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8f8x\" (UniqueName: \"kubernetes.io/projected/d46238ab-90d4-41b8-b546-6dbff06cf5ed-kube-api-access-h8f8x\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.426168 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-host-run-k8s-cni-cncf-io\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.426243 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-host-run-k8s-cni-cncf-io\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.426292 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-host-var-lib-kubelet\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.426679 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8bb307e5-0827-4602-95ff-18dec456002b-hosts-file\") pod \"node-resolver-dtrvj\" (UID: \"8bb307e5-0827-4602-95ff-18dec456002b\") " pod="openshift-dns/node-resolver-dtrvj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.426726 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-os-release\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.426812 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-multus-socket-dir-parent\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.426842 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-host-run-multus-certs\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.426871 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-etc-kubernetes\") pod 
\"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.426903 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-host-var-lib-cni-bin\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.427161 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ae17b497-19c0-4f59-93e1-279069e2710a-system-cni-dir\") pod \"multus-additional-cni-plugins-f85c7\" (UID: \"ae17b497-19c0-4f59-93e1-279069e2710a\") " pod="openshift-multus/multus-additional-cni-plugins-f85c7" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.427388 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-host-run-netns\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.427434 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-host-var-lib-cni-multus\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.427612 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b415d17e-f329-40e7-8a3f-32881cb5347a-cni-binary-copy\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.427884 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ae17b497-19c0-4f59-93e1-279069e2710a-cnibin\") pod \"multus-additional-cni-plugins-f85c7\" (UID: \"ae17b497-19c0-4f59-93e1-279069e2710a\") " pod="openshift-multus/multus-additional-cni-plugins-f85c7" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.427920 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-multus-conf-dir\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.428016 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-multus-cni-dir\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.428042 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5a5d8fc0-7aa5-431a-9add-9bdcc6d20091-rootfs\") pod \"machine-config-daemon-gx4vp\" (UID: \"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\") " pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.428074 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ae17b497-19c0-4f59-93e1-279069e2710a-os-release\") pod \"multus-additional-cni-plugins-f85c7\" (UID: \"ae17b497-19c0-4f59-93e1-279069e2710a\") " pod="openshift-multus/multus-additional-cni-plugins-f85c7" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.428020 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-system-cni-dir\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.428215 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-hostroot\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.428256 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b415d17e-f329-40e7-8a3f-32881cb5347a-cnibin\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.428274 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b415d17e-f329-40e7-8a3f-32881cb5347a-multus-daemon-config\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.428528 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a5d8fc0-7aa5-431a-9add-9bdcc6d20091-mcd-auth-proxy-config\") pod \"machine-config-daemon-gx4vp\" (UID: \"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\") " pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.428641 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ae17b497-19c0-4f59-93e1-279069e2710a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f85c7\" (UID: \"ae17b497-19c0-4f59-93e1-279069e2710a\") " pod="openshift-multus/multus-additional-cni-plugins-f85c7" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.428805 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ae17b497-19c0-4f59-93e1-279069e2710a-cni-binary-copy\") pod \"multus-additional-cni-plugins-f85c7\" (UID: \"ae17b497-19c0-4f59-93e1-279069e2710a\") " pod="openshift-multus/multus-additional-cni-plugins-f85c7" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.439148 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a5d8fc0-7aa5-431a-9add-9bdcc6d20091-proxy-tls\") pod \"machine-config-daemon-gx4vp\" (UID: \"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\") " pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.440396 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/ae17b497-19c0-4f59-93e1-279069e2710a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f85c7\" (UID: \"ae17b497-19c0-4f59-93e1-279069e2710a\") " pod="openshift-multus/multus-additional-cni-plugins-f85c7" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.441535 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-ku
be-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:29Z is after 2025-08-24T17:21:41Z"
Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.447909 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h44ws\" (UniqueName: \"kubernetes.io/projected/5a5d8fc0-7aa5-431a-9add-9bdcc6d20091-kube-api-access-h44ws\") pod \"machine-config-daemon-gx4vp\" (UID: \"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\") " pod="openshift-machine-config-operator/machine-config-daemon-gx4vp"
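
Note: the waiting state quoted above ("back-off 10s restarting failed container=kube-apiserver-check-endpoints") comes from the kubelet's per-container restart back-off. Under the kubelet's traditional defaults (an assumption here, not something this journal states) the delay starts at 10s, doubles on each crash, and is capped at 5m. A sketch of the resulting delay schedule:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	delay := 10 * time.Second   // assumed initial CrashLoopBackOff delay
    	maxDelay := 5 * time.Minute // assumed ceiling
    	for restart := 1; restart <= 8; restart++ {
    		fmt.Printf("restart %d: wait %v before retrying\n", restart, delay)
    		delay *= 2 // double after every failed restart
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }

The container here is at restartCount 1, hence the 10s figure; a few more failures and the message would read back-off 20s, 40s, and so on up to the cap.
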
\"node-resolver-dtrvj\" (UID: \"8bb307e5-0827-4602-95ff-18dec456002b\") " pod="openshift-dns/node-resolver-dtrvj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.468104 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxtfg\" (UniqueName: \"kubernetes.io/projected/b415d17e-f329-40e7-8a3f-32881cb5347a-kube-api-access-xxtfg\") pod \"multus-4dstj\" (UID: \"b415d17e-f329-40e7-8a3f-32881cb5347a\") " pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.471433 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqnqc\" (UniqueName: \"kubernetes.io/projected/ae17b497-19c0-4f59-93e1-279069e2710a-kube-api-access-jqnqc\") pod \"multus-additional-cni-plugins-f85c7\" (UID: \"ae17b497-19c0-4f59-93e1-279069e2710a\") " pod="openshift-multus/multus-additional-cni-plugins-f85c7" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.477974 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.479719 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.479756 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.479779 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.480012 4886 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.501014 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.504578 4886 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.504808 4886 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.506768 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:29 crc 
kubenswrapper[4886]: I0129 16:22:29.506796 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.506804 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.506819 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.506828 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:29Z","lastTransitionTime":"2026-01-29T16:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.513389 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:29 crc kubenswrapper[4886]: E0129 16:22:29.524584 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.526062 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.526591 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-run-ovn\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.526700 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d46238ab-90d4-41b8-b546-6dbff06cf5ed-ovn-node-metrics-cert\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.526774 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-var-lib-openvswitch\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.526702 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-run-ovn\") pod \"ovnkube-node-bsnwn\" (UID: 
\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.526839 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-log-socket\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.526967 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-log-socket\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.526968 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-var-lib-openvswitch\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.527045 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-run-netns\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.527126 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-node-log\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.527155 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-cni-bin\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.527182 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.527184 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-node-log\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.527210 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-run-systemd\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 
crc kubenswrapper[4886]: I0129 16:22:29.527237 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-systemd-units\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.527238 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-cni-bin\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.527256 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-run-systemd\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.527264 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d46238ab-90d4-41b8-b546-6dbff06cf5ed-ovnkube-config\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.527341 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.527377 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-systemd-units\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.527636 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-kubelet\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.527721 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-run-ovn-kubernetes\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.527776 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-run-ovn-kubernetes\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.527657 4886 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.527863 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.527876 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.527899 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.527730 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-kubelet\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.527916 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:29Z","lastTransitionTime":"2026-01-29T16:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.527982 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d46238ab-90d4-41b8-b546-6dbff06cf5ed-ovnkube-config\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.528090 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-etc-openvswitch\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.528171 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-run-openvswitch\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.528262 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-run-openvswitch\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.528208 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-etc-openvswitch\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.528277 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/d46238ab-90d4-41b8-b546-6dbff06cf5ed-env-overrides\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.528441 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-slash\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.528476 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d46238ab-90d4-41b8-b546-6dbff06cf5ed-ovnkube-script-lib\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.528494 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-slash\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.528513 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-cni-netd\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.528541 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8f8x\" (UniqueName: \"kubernetes.io/projected/d46238ab-90d4-41b8-b546-6dbff06cf5ed-kube-api-access-h8f8x\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.528574 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-cni-netd\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.529010 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d46238ab-90d4-41b8-b546-6dbff06cf5ed-env-overrides\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.529156 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d46238ab-90d4-41b8-b546-6dbff06cf5ed-ovnkube-script-lib\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.529231 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-run-netns\") pod \"ovnkube-node-bsnwn\" (UID: 
\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.529407 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d46238ab-90d4-41b8-b546-6dbff06cf5ed-ovn-node-metrics-cert\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.539652 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:29 crc kubenswrapper[4886]: E0129 16:22:29.541915 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-29T16:22:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.544654 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8f8x\" (UniqueName: \"kubernetes.io/projected/d46238ab-90d4-41b8-b546-6dbff06cf5ed-kube-api-access-h8f8x\") pod \"ovnkube-node-bsnwn\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.545275 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.545417 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.545498 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.545570 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.545633 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:29Z","lastTransitionTime":"2026-01-29T16:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.550024 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:29 crc kubenswrapper[4886]: E0129 16:22:29.559885 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.563707 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.563766 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.563777 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.563800 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.563812 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:29Z","lastTransitionTime":"2026-01-29T16:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.570600 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:29 crc kubenswrapper[4886]: E0129 16:22:29.576378 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.579798 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.579860 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.579871 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.579892 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.579903 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:29Z","lastTransitionTime":"2026-01-29T16:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.581469 4886 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-29 16:17:28 +0000 UTC, rotation deadline is 2026-10-25 20:36:32.418189876 +0000 UTC Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.581510 4886 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6460h14m2.836682397s for next certificate rotation Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.581526 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 05:46:59.063814508 +0000 UTC Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.585412 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226
abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:29 crc kubenswrapper[4886]: E0129 16:22:29.593824 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f
9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:29 crc kubenswrapper[4886]: E0129 16:22:29.593969 4886 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.596102 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.596151 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.596164 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.596183 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.596199 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:29Z","lastTransitionTime":"2026-01-29T16:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.599684 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.612240 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.627378 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.630359 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4dstj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.638660 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-dtrvj" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.639986 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:29 crc kubenswrapper[4886]: W0129 16:22:29.641866 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb415d17e_f329_40e7_8a3f_32881cb5347a.slice/crio-ead17cba13f7ff56130b29f0f8f1b785314e62c7c541bed7b12f694984043839 WatchSource:0}: Error finding container ead17cba13f7ff56130b29f0f8f1b785314e62c7c541bed7b12f694984043839: Status 404 returned error can't find the container with id ead17cba13f7ff56130b29f0f8f1b785314e62c7c541bed7b12f694984043839 Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.647089 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-f85c7" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.657321 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.659953 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.669285 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.672642 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.680417 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:29Z is after 2025-08-24T17:21:41Z" Jan 
29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.692649 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:29 crc kubenswrapper[4886]: W0129 16:22:29.695564 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae17b497_19c0_4f59_93e1_279069e2710a.slice/crio-35f870438ce47ed76208be13fdb1bae4a71360b78956db1c93f6fb568ce3193c WatchSource:0}: Error finding container 35f870438ce47ed76208be13fdb1bae4a71360b78956db1c93f6fb568ce3193c: Status 404 returned error can't find the container with id 35f870438ce47ed76208be13fdb1bae4a71360b78956db1c93f6fb568ce3193c Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.700641 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.700696 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.700709 4886 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.700726 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.700743 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:29Z","lastTransitionTime":"2026-01-29T16:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.708030 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:29 crc kubenswrapper[4886]: W0129 16:22:29.710790 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd46238ab_90d4_41b8_b546_6dbff06cf5ed.slice/crio-4945a9e8ab72e79012e84ebf83643f2ee2b4c4028b579b7a2f7381c763968861 WatchSource:0}: Error finding container 4945a9e8ab72e79012e84ebf83643f2ee2b4c4028b579b7a2f7381c763968861: Status 404 returned error can't find 
the container with id 4945a9e8ab72e79012e84ebf83643f2ee2b4c4028b579b7a2f7381c763968861 Jan 29 16:22:29 crc kubenswrapper[4886]: W0129 16:22:29.711780 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a5d8fc0_7aa5_431a_9add_9bdcc6d20091.slice/crio-f0a013fabe773541a0659c16cd7cafe73576573429d71f8747acfccadb0ba45f WatchSource:0}: Error finding container f0a013fabe773541a0659c16cd7cafe73576573429d71f8747acfccadb0ba45f: Status 404 returned error can't find the container with id f0a013fabe773541a0659c16cd7cafe73576573429d71f8747acfccadb0ba45f Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.721279 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.803285 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.803613 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.803625 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.803642 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.803652 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:29Z","lastTransitionTime":"2026-01-29T16:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.832626 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4dstj" event={"ID":"b415d17e-f329-40e7-8a3f-32881cb5347a","Type":"ContainerStarted","Data":"ead17cba13f7ff56130b29f0f8f1b785314e62c7c541bed7b12f694984043839"} Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.834486 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerStarted","Data":"f0a013fabe773541a0659c16cd7cafe73576573429d71f8747acfccadb0ba45f"} Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.836340 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" event={"ID":"d46238ab-90d4-41b8-b546-6dbff06cf5ed","Type":"ContainerStarted","Data":"4945a9e8ab72e79012e84ebf83643f2ee2b4c4028b579b7a2f7381c763968861"} Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.838044 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" event={"ID":"ae17b497-19c0-4f59-93e1-279069e2710a","Type":"ContainerStarted","Data":"35f870438ce47ed76208be13fdb1bae4a71360b78956db1c93f6fb568ce3193c"} Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.840133 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dtrvj" event={"ID":"8bb307e5-0827-4602-95ff-18dec456002b","Type":"ContainerStarted","Data":"63905c573035568056beb92f129e5df0594410c7e45e6948e76a63c5b47f1def"} Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.906431 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.906482 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.906493 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.906512 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:29 crc kubenswrapper[4886]: I0129 16:22:29.906525 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:29Z","lastTransitionTime":"2026-01-29T16:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.009020 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.009062 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.009071 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.009092 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.009103 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:30Z","lastTransitionTime":"2026-01-29T16:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.112718 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.112767 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.112781 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.112813 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.112826 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:30Z","lastTransitionTime":"2026-01-29T16:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.215311 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.215379 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.215391 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.215411 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.215426 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:30Z","lastTransitionTime":"2026-01-29T16:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.317647 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.317681 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.317689 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.317705 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.317716 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:30Z","lastTransitionTime":"2026-01-29T16:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.337402 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.337538 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:22:30 crc kubenswrapper[4886]: E0129 16:22:30.337578 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:22:38.337544718 +0000 UTC m=+41.246264050 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.337664 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:22:30 crc kubenswrapper[4886]: E0129 16:22:30.337678 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:22:30 crc kubenswrapper[4886]: E0129 16:22:30.337698 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:22:30 crc kubenswrapper[4886]: E0129 16:22:30.337710 4886 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.337732 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:22:30 crc kubenswrapper[4886]: E0129 16:22:30.337781 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:22:38.337747274 +0000 UTC m=+41.246466546 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.337811 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:22:30 crc kubenswrapper[4886]: E0129 16:22:30.337905 4886 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 29 16:22:30 crc kubenswrapper[4886]: E0129 16:22:30.337909 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 29 16:22:30 crc kubenswrapper[4886]: E0129 16:22:30.337932 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 29 16:22:30 crc kubenswrapper[4886]: E0129 16:22:30.337941 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:22:38.337930179 +0000 UTC m=+41.246649451 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 29 16:22:30 crc kubenswrapper[4886]: E0129 16:22:30.337945 4886 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 16:22:30 crc kubenswrapper[4886]: E0129 16:22:30.337960 4886 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 29 16:22:30 crc kubenswrapper[4886]: E0129 16:22:30.337996 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:22:38.33798458 +0000 UTC m=+41.246703852 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 16:22:30 crc kubenswrapper[4886]: E0129 16:22:30.338160 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:22:38.338126914 +0000 UTC m=+41.246846246 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.420786 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.420825 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.420834 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.420866 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.420880 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:30Z","lastTransitionTime":"2026-01-29T16:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.524127 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.524200 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.524224 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.524246 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.524350 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:30Z","lastTransitionTime":"2026-01-29T16:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.582452 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 13:36:55.284922746 +0000 UTC
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.615022 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.615056 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:22:30 crc kubenswrapper[4886]: E0129 16:22:30.615167 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.615208 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:22:30 crc kubenswrapper[4886]: E0129 16:22:30.615370 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:22:30 crc kubenswrapper[4886]: E0129 16:22:30.615478 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.626140 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.626190 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.626203 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.626221 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.626231 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:30Z","lastTransitionTime":"2026-01-29T16:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.728475 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.728522 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.728536 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.728559 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.728572 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:30Z","lastTransitionTime":"2026-01-29T16:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.831765 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.831809 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.831821 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.831842 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.831858 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:30Z","lastTransitionTime":"2026-01-29T16:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.845589 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" event={"ID":"ae17b497-19c0-4f59-93e1-279069e2710a","Type":"ContainerStarted","Data":"be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972"}
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.847201 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dtrvj" event={"ID":"8bb307e5-0827-4602-95ff-18dec456002b","Type":"ContainerStarted","Data":"8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af"}
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.848885 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerStarted","Data":"8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028"}
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.850553 4886 generic.go:334] "Generic (PLEG): container finished" podID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerID="f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b" exitCode=0
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.850658 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" event={"ID":"d46238ab-90d4-41b8-b546-6dbff06cf5ed","Type":"ContainerDied","Data":"f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b"}
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.852722 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4dstj" event={"ID":"b415d17e-f329-40e7-8a3f-32881cb5347a","Type":"ContainerStarted","Data":"91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df"}
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.868341 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:30Z is after 2025-08-24T17:21:41Z"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.884942 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:30Z is after 2025-08-24T17:21:41Z"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.898090 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:30Z is after 2025-08-24T17:21:41Z"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.914574 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:30Z is after 2025-08-24T17:21:41Z"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.926243 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:30Z is after 2025-08-24T17:21:41Z"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.934454 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.934485 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.934494 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.934509 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.934519 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:30Z","lastTransitionTime":"2026-01-29T16:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.940194 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:30Z is after 2025-08-24T17:21:41Z"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.952885 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:30Z is after 2025-08-24T17:21:41Z"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.961989 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:30Z is after 2025-08-24T17:21:41Z"
Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.978054 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453
265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:30 crc kubenswrapper[4886]: I0129 16:22:30.989224 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.001973 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.016481 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.028953 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.036778 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.036839 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.036885 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.036907 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.036920 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:31Z","lastTransitionTime":"2026-01-29T16:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.042205 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.053677 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.063210 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.084229 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.100207 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.111106 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.129023 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.139433 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.139486 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.139496 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.139514 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.139526 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:31Z","lastTransitionTime":"2026-01-29T16:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.143357 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.156024 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.166463 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.177250 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.188939 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.204772 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy 
cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"i
mageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.242316 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.242381 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.242394 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.242414 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.242428 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:31Z","lastTransitionTime":"2026-01-29T16:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.345385 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.345439 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.345449 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.345465 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.345476 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:31Z","lastTransitionTime":"2026-01-29T16:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.447266 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.447317 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.447355 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.447378 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.447399 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:31Z","lastTransitionTime":"2026-01-29T16:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.549752 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.549804 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.549818 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.549838 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.549858 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:31Z","lastTransitionTime":"2026-01-29T16:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
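The NodeNotReady condition above repeats because the container runtime keeps reporting NetworkReady=false until a CNI network config appears in the directory named in the message. A rough sketch of that discovery step (the directory comes from the log; treating .conf/.conflist/.json as config files is an assumption about how libcni-style loaders behave):

    // cni_ready_check.go — illustrative only: report NetworkReady based on
    // whether any CNI network configuration file exists in the conf dir.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("NetworkReady=false:", err)
            return
        }
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Printf("NetworkReady=true: found %s\n", e.Name())
                return
            }
        }
        fmt.Printf("NetworkReady=false: no CNI configuration file in %s\n", dir)
    }

Here the directory stays empty until the ovnkube-node pod's ovnkube-controller container writes its config, which is why the condition clears only after OVN-Kubernetes starts.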
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.582916 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 22:33:15.946098259 +0000 UTC
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.652924 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.652970 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.652992 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.653014 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.653029 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:31Z","lastTransitionTime":"2026-01-29T16:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.755460 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.755581 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.755603 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.755625 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.755642 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:31Z","lastTransitionTime":"2026-01-29T16:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
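The certificate_manager.go:356 entry above concerns a different certificate, the kubelet-serving one, and shows a rotation deadline well before expiry. A sketch of how such a deadline could be derived, assuming client-go's certificate manager schedules rotation at a jittered point between roughly 70% and 90% of the validity window (NotBefore below is a hypothetical issue time one year before the logged expiration; NotAfter is from the log):

    // rotation_deadline.go — illustrative computation of a jittered
    // rotation deadline inside a certificate's validity window.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func main() {
        notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC) // hypothetical issue time
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // expiration from the log

        lifetime := notAfter.Sub(notBefore)
        // Random point in [0.7, 0.9) of the lifetime.
        jitter := time.Duration((0.7 + 0.2*rand.Float64()) * float64(lifetime))
        deadline := notBefore.Add(jitter)
        fmt.Println("rotation deadline:", deadline) // e.g. a date like the logged 2025-12-07
    }

Note that the logged deadline (2025-12-07) is already behind the node clock (2026-01-29), so rotation is overdue, consistent with the stale certificates seen elsewhere in this boot.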
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.857390 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.857428 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.857438 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.857458 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.857469 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:31Z","lastTransitionTime":"2026-01-29T16:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.858661 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" event={"ID":"d46238ab-90d4-41b8-b546-6dbff06cf5ed","Type":"ContainerStarted","Data":"54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8"}
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.858721 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" event={"ID":"d46238ab-90d4-41b8-b546-6dbff06cf5ed","Type":"ContainerStarted","Data":"b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51"}
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.860659 4886 generic.go:334] "Generic (PLEG): container finished" podID="ae17b497-19c0-4f59-93e1-279069e2710a" containerID="be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972" exitCode=0
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.860727 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" event={"ID":"ae17b497-19c0-4f59-93e1-279069e2710a","Type":"ContainerDied","Data":"be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972"}
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.862998 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerStarted","Data":"a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2"}
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.865754 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-cjsnw"]
Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.866481 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-cjsnw" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.868235 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.868458 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.868471 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.868596 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.876601 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\
\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.892763 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.905386 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.954747 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8xxd\" (UniqueName: \"kubernetes.io/projected/38a68a4f-64a7-404e-8f15-1c299e5a4e2c-kube-api-access-j8xxd\") pod \"node-ca-cjsnw\" (UID: \"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\") " pod="openshift-image-registry/node-ca-cjsnw" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.954861 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/38a68a4f-64a7-404e-8f15-1c299e5a4e2c-host\") pod \"node-ca-cjsnw\" (UID: \"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\") " pod="openshift-image-registry/node-ca-cjsnw" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.954894 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/38a68a4f-64a7-404e-8f15-1c299e5a4e2c-serviceca\") pod \"node-ca-cjsnw\" (UID: 
\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\") " pod="openshift-image-registry/node-ca-cjsnw" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.957313 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready
\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o:
//f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.980720 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.980774 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.980787 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.980805 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.980815 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:31Z","lastTransitionTime":"2026-01-29T16:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.983394 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:31 crc kubenswrapper[4886]: I0129 16:22:31.998530 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.011743 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.024868 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.037367 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.049154 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.056552 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8xxd\" (UniqueName: \"kubernetes.io/projected/38a68a4f-64a7-404e-8f15-1c299e5a4e2c-kube-api-access-j8xxd\") pod \"node-ca-cjsnw\" (UID: \"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\") " pod="openshift-image-registry/node-ca-cjsnw" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.056610 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/38a68a4f-64a7-404e-8f15-1c299e5a4e2c-host\") pod \"node-ca-cjsnw\" (UID: \"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\") " pod="openshift-image-registry/node-ca-cjsnw" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.056657 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/38a68a4f-64a7-404e-8f15-1c299e5a4e2c-serviceca\") pod \"node-ca-cjsnw\" (UID: 
\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\") " pod="openshift-image-registry/node-ca-cjsnw" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.056759 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/38a68a4f-64a7-404e-8f15-1c299e5a4e2c-host\") pod \"node-ca-cjsnw\" (UID: \"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\") " pod="openshift-image-registry/node-ca-cjsnw" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.057664 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/38a68a4f-64a7-404e-8f15-1c299e5a4e2c-serviceca\") pod \"node-ca-cjsnw\" (UID: \"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\") " pod="openshift-image-registry/node-ca-cjsnw" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.060885 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.072729 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.074395 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8xxd\" (UniqueName: \"kubernetes.io/projected/38a68a4f-64a7-404e-8f15-1c299e5a4e2c-kube-api-access-j8xxd\") pod \"node-ca-cjsnw\" (UID: \"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\") " pod="openshift-image-registry/node-ca-cjsnw" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.082992 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.083027 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.083038 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.083055 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.083066 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:32Z","lastTransitionTime":"2026-01-29T16:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.087479 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.099373 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44w
s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.112797 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.127644 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.139635 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.153362 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.171724 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:32 crc 
kubenswrapper[4886]: I0129 16:22:32.182260 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-cjsnw" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.184416 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.185864 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.185912 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.185925 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.185944 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.185958 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:32Z","lastTransitionTime":"2026-01-29T16:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:32 crc kubenswrapper[4886]: W0129 16:22:32.194921 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38a68a4f_64a7_404e_8f15_1c299e5a4e2c.slice/crio-244cf09893e2fb94e076edcdf8c64dbe9914b2307f7612efcc075c2bdacc2b65 WatchSource:0}: Error finding container 244cf09893e2fb94e076edcdf8c64dbe9914b2307f7612efcc075c2bdacc2b65: Status 404 returned error can't find the container with id 244cf09893e2fb94e076edcdf8c64dbe9914b2307f7612efcc075c2bdacc2b65 Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.200840 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.213406 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.235854 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.245501 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.258306 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.270422 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.280236 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.290704 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.290963 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.291031 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.291132 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.291659 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:32Z","lastTransitionTime":"2026-01-29T16:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.393732 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.393791 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.393803 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.393824 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.393838 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:32Z","lastTransitionTime":"2026-01-29T16:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.496706 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.496777 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.496796 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.496834 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.496859 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:32Z","lastTransitionTime":"2026-01-29T16:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.583431 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 06:01:45.817638682 +0000 UTC Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.600412 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.600822 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.600957 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.601112 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.601256 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:32Z","lastTransitionTime":"2026-01-29T16:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.614757 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.614828 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:22:32 crc kubenswrapper[4886]: E0129 16:22:32.614994 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.615045 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:22:32 crc kubenswrapper[4886]: E0129 16:22:32.615199 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:22:32 crc kubenswrapper[4886]: E0129 16:22:32.615415 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.703542 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.703576 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.703585 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.703599 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.703609 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:32Z","lastTransitionTime":"2026-01-29T16:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.805224 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.805562 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.805573 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.805592 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.805603 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:32Z","lastTransitionTime":"2026-01-29T16:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.867426 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" event={"ID":"ae17b497-19c0-4f59-93e1-279069e2710a","Type":"ContainerStarted","Data":"28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d"} Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.868814 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-cjsnw" event={"ID":"38a68a4f-64a7-404e-8f15-1c299e5a4e2c","Type":"ContainerStarted","Data":"567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f"} Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.868875 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-cjsnw" event={"ID":"38a68a4f-64a7-404e-8f15-1c299e5a4e2c","Type":"ContainerStarted","Data":"244cf09893e2fb94e076edcdf8c64dbe9914b2307f7612efcc075c2bdacc2b65"} Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.872167 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" event={"ID":"d46238ab-90d4-41b8-b546-6dbff06cf5ed","Type":"ContainerStarted","Data":"db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a"} Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.872235 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" event={"ID":"d46238ab-90d4-41b8-b546-6dbff06cf5ed","Type":"ContainerStarted","Data":"34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454"} Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.881947 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.896942 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.907717 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.907739 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.907747 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.907762 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.907773 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:32Z","lastTransitionTime":"2026-01-29T16:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.910605 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.931906 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.943454 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.957825 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.970203 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.982428 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:32 crc kubenswrapper[4886]: I0129 16:22:32.994817 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.007170 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.009581 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.009626 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.009641 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.009662 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.009676 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:33Z","lastTransitionTime":"2026-01-29T16:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.017350 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.030397 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.042349 4886 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.056916 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.118470 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.118510 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.118519 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.118537 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.118546 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:33Z","lastTransitionTime":"2026-01-29T16:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.227721 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.227771 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.227782 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.227801 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.227813 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:33Z","lastTransitionTime":"2026-01-29T16:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.330426 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.330481 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.330492 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.330512 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.330527 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:33Z","lastTransitionTime":"2026-01-29T16:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.433058 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.433089 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.433097 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.433111 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.433120 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:33Z","lastTransitionTime":"2026-01-29T16:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.536160 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.536200 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.536210 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.536226 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.536239 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:33Z","lastTransitionTime":"2026-01-29T16:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.584394 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 14:35:21.382368129 +0000 UTC Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.639224 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.639276 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.639294 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.639315 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.639356 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:33Z","lastTransitionTime":"2026-01-29T16:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.742220 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.742280 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.742297 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.742356 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.742375 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:33Z","lastTransitionTime":"2026-01-29T16:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.846030 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.846096 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.846120 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.846151 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.846176 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:33Z","lastTransitionTime":"2026-01-29T16:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.878224 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" event={"ID":"d46238ab-90d4-41b8-b546-6dbff06cf5ed","Type":"ContainerStarted","Data":"aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8"} Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.878279 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" event={"ID":"d46238ab-90d4-41b8-b546-6dbff06cf5ed","Type":"ContainerStarted","Data":"1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af"} Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.895900 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.909784 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.920227 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.948606 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.948645 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.948657 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.948673 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.948684 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:33Z","lastTransitionTime":"2026-01-29T16:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.950627 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:33Z 
is after 2025-08-24T17:21:41Z" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.964621 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.977737 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:33 crc kubenswrapper[4886]: I0129 16:22:33.991749 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.007796 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\
\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.024987 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.036399 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.051061 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.051131 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.051143 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.051213 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.051230 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:34Z","lastTransitionTime":"2026-01-29T16:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.052095 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.063672 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.078350 4886 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.084361 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.085274 4886 scope.go:117] "RemoveContainer" containerID="8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88" Jan 29 16:22:34 crc kubenswrapper[4886]: E0129 16:22:34.085545 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.093553 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.154007 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.154474 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.154487 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.154507 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.154520 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:34Z","lastTransitionTime":"2026-01-29T16:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.257087 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.257130 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.257146 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.257165 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.257182 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:34Z","lastTransitionTime":"2026-01-29T16:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.359583 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.359618 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.359634 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.359650 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.359660 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:34Z","lastTransitionTime":"2026-01-29T16:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.465293 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.465353 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.465367 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.465385 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.465397 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:34Z","lastTransitionTime":"2026-01-29T16:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.567862 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.567904 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.567940 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.567960 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.567973 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:34Z","lastTransitionTime":"2026-01-29T16:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.585235 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 02:11:50.915278961 +0000 UTC Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.614753 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.614790 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.614776 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:22:34 crc kubenswrapper[4886]: E0129 16:22:34.615018 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:22:34 crc kubenswrapper[4886]: E0129 16:22:34.615147 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:22:34 crc kubenswrapper[4886]: E0129 16:22:34.615246 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.670297 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.670362 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.670373 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.670390 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.670401 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:34Z","lastTransitionTime":"2026-01-29T16:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.773072 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.773148 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.773157 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.773170 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.773180 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:34Z","lastTransitionTime":"2026-01-29T16:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.876290 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.876364 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.876384 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.876409 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.876429 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:34Z","lastTransitionTime":"2026-01-29T16:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.888049 4886 generic.go:334] "Generic (PLEG): container finished" podID="ae17b497-19c0-4f59-93e1-279069e2710a" containerID="28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d" exitCode=0 Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.888142 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" event={"ID":"ae17b497-19c0-4f59-93e1-279069e2710a","Type":"ContainerDied","Data":"28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d"} Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.907381 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.923343 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.940490 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T16:22:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.962961 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301
dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.977201 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.979561 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.979592 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.979603 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.980872 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.980893 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:34Z","lastTransitionTime":"2026-01-29T16:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:34 crc kubenswrapper[4886]: I0129 16:22:34.992389 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.006384 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.018194 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.031305 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.047843 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.059070 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.068718 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.080346 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 
16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.084140 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.084166 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.084177 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.084193 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.084203 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:35Z","lastTransitionTime":"2026-01-29T16:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.092917 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.187627 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.187671 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.187681 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.187700 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.187713 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:35Z","lastTransitionTime":"2026-01-29T16:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.291308 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.291416 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.291433 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.291461 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.291477 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:35Z","lastTransitionTime":"2026-01-29T16:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.394399 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.394517 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.394550 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.394591 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.394616 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:35Z","lastTransitionTime":"2026-01-29T16:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.498242 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.498315 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.498404 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.498435 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.498454 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:35Z","lastTransitionTime":"2026-01-29T16:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.586448 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 09:02:53.854636031 +0000 UTC Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.600443 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.600487 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.600498 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.600514 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.600528 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:35Z","lastTransitionTime":"2026-01-29T16:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.703026 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.703085 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.703098 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.703118 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.703132 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:35Z","lastTransitionTime":"2026-01-29T16:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.806182 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.806227 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.806240 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.806259 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.806275 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:35Z","lastTransitionTime":"2026-01-29T16:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.895845 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" event={"ID":"d46238ab-90d4-41b8-b546-6dbff06cf5ed","Type":"ContainerStarted","Data":"38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5"} Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.897862 4886 generic.go:334] "Generic (PLEG): container finished" podID="ae17b497-19c0-4f59-93e1-279069e2710a" containerID="db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6" exitCode=0 Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.897908 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" event={"ID":"ae17b497-19c0-4f59-93e1-279069e2710a","Type":"ContainerDied","Data":"db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6"} Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.908314 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.908400 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.908417 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.908441 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.908459 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:35Z","lastTransitionTime":"2026-01-29T16:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.924306 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.942266 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.954727 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.972686 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.981572 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:35 crc kubenswrapper[4886]: I0129 16:22:35.999632 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.011707 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.013839 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.013866 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.013901 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.013933 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:36Z","lastTransitionTime":"2026-01-29T16:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.016027 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.037746 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.053029 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.070171 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.083868 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.099097 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.114618 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.117897 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.117940 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.117953 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.117993 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.118006 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:36Z","lastTransitionTime":"2026-01-29T16:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.127703 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.222047 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.222113 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.222131 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.222159 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.222177 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:36Z","lastTransitionTime":"2026-01-29T16:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.325384 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.325655 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.325766 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.325877 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.325971 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:36Z","lastTransitionTime":"2026-01-29T16:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.428889 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.428947 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.428962 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.428985 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.428999 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:36Z","lastTransitionTime":"2026-01-29T16:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.532865 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.533528 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.533560 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.533596 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.533622 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:36Z","lastTransitionTime":"2026-01-29T16:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.586980 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 05:45:19.675990784 +0000 UTC Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.614453 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.614490 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.614503 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:22:36 crc kubenswrapper[4886]: E0129 16:22:36.614603 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:22:36 crc kubenswrapper[4886]: E0129 16:22:36.614725 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:22:36 crc kubenswrapper[4886]: E0129 16:22:36.614797 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.636199 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.636238 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.636249 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.636265 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.636277 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:36Z","lastTransitionTime":"2026-01-29T16:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.738728 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.738789 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.738806 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.738837 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.738856 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:36Z","lastTransitionTime":"2026-01-29T16:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.841655 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.841698 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.841710 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.841729 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.841740 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:36Z","lastTransitionTime":"2026-01-29T16:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.944515 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.944550 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.944560 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.944575 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:36 crc kubenswrapper[4886]: I0129 16:22:36.944588 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:36Z","lastTransitionTime":"2026-01-29T16:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.047392 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.047460 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.047478 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.047506 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.047523 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:37Z","lastTransitionTime":"2026-01-29T16:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.150385 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.150458 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.150479 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.150507 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.150529 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:37Z","lastTransitionTime":"2026-01-29T16:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.253962 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.254010 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.254021 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.254041 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.254056 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:37Z","lastTransitionTime":"2026-01-29T16:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.357157 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.357230 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.357246 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.357270 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.357294 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:37Z","lastTransitionTime":"2026-01-29T16:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.460875 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.460925 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.460937 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.460954 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.460963 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:37Z","lastTransitionTime":"2026-01-29T16:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.564778 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.564837 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.564854 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.564880 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.564897 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:37Z","lastTransitionTime":"2026-01-29T16:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.588138 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 01:55:04.444455269 +0000 UTC Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.668236 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.668281 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.668297 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.668320 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.668373 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:37Z","lastTransitionTime":"2026-01-29T16:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.773783 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.773829 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.773851 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.773879 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.773901 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:37Z","lastTransitionTime":"2026-01-29T16:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.877537 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.877618 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.877637 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.877669 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.877687 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:37Z","lastTransitionTime":"2026-01-29T16:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.911380 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" event={"ID":"ae17b497-19c0-4f59-93e1-279069e2710a","Type":"ContainerDied","Data":"bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8"} Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.911304 4886 generic.go:334] "Generic (PLEG): container finished" podID="ae17b497-19c0-4f59-93e1-279069e2710a" containerID="bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8" exitCode=0 Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.933722 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.933722 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:37Z is after 2025-08-24T17:21:41Z"
Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.956468 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.971423 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259712
6bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.981747 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.982072 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.982084 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.982103 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:37 crc 
kubenswrapper[4886]: I0129 16:22:37.982117 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:37Z","lastTransitionTime":"2026-01-29T16:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.982844 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:37 crc kubenswrapper[4886]: I0129 16:22:37.993444 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.020403 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.030037 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.041687 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.054316 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.065491 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.093439 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.093958 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.094004 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.094016 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.094037 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 
29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.094049 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:38Z","lastTransitionTime":"2026-01-29T16:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.133185 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.145514 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.154089 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:38Z is after 2025-08-24T17:21:41Z" Jan 29 
16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.196281 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.196318 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.196343 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.196359 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.196370 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:38Z","lastTransitionTime":"2026-01-29T16:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.291835 4886 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.315230 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.315258 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.315268 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.315284 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.315293 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:38Z","lastTransitionTime":"2026-01-29T16:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.380652 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:22:38 crc kubenswrapper[4886]: E0129 16:22:38.380851 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:22:54.380819001 +0000 UTC m=+57.289538273 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.381141 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:22:38 crc kubenswrapper[4886]: E0129 16:22:38.381803 4886 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:22:38 crc kubenswrapper[4886]: E0129 16:22:38.381857 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:22:54.38184765 +0000 UTC m=+57.290566922 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.381880 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:22:38 crc kubenswrapper[4886]: E0129 16:22:38.381933 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:22:38 crc kubenswrapper[4886]: E0129 16:22:38.381967 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:22:38 crc kubenswrapper[4886]: E0129 16:22:38.381982 4886 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:22:38 crc kubenswrapper[4886]: E0129 16:22:38.382045 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-29 16:22:54.382027925 +0000 UTC m=+57.290747197 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.382079 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.382117 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:22:38 crc kubenswrapper[4886]: E0129 16:22:38.382174 4886 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:22:38 crc kubenswrapper[4886]: E0129 16:22:38.382203 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:22:54.38219627 +0000 UTC m=+57.290915542 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:22:38 crc kubenswrapper[4886]: E0129 16:22:38.382297 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:22:38 crc kubenswrapper[4886]: E0129 16:22:38.382345 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:22:38 crc kubenswrapper[4886]: E0129 16:22:38.382364 4886 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:22:38 crc kubenswrapper[4886]: E0129 16:22:38.382450 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-29 16:22:54.382419066 +0000 UTC m=+57.291138498 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.417560 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.417607 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.417620 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.417660 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.417674 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:38Z","lastTransitionTime":"2026-01-29T16:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.519435 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.519467 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.519475 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.519490 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.519498 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:38Z","lastTransitionTime":"2026-01-29T16:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.589006 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 19:12:37.894056493 +0000 UTC Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.614886 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.614947 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:22:38 crc kubenswrapper[4886]: E0129 16:22:38.614984 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:22:38 crc kubenswrapper[4886]: E0129 16:22:38.615061 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.615186 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:22:38 crc kubenswrapper[4886]: E0129 16:22:38.615234 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.621487 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.621532 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.621541 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.621554 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.621563 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:38Z","lastTransitionTime":"2026-01-29T16:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.627661 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.640919 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.652966 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.670641 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.690810 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.701975 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.724201 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.725414 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.725431 4886 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.725449 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.725460 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:38Z","lastTransitionTime":"2026-01-29T16:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.724410 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aed
f9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.740600 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.757408 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.774870 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.793567 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.809308 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.825650 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:38Z is after 2025-08-24T17:21:41Z" Jan 29 
16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.827714 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.827783 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.827800 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.827827 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.827845 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:38Z","lastTransitionTime":"2026-01-29T16:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.852820 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.919912 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" event={"ID":"ae17b497-19c0-4f59-93e1-279069e2710a","Type":"ContainerStarted","Data":"725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19"} Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.927982 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" event={"ID":"d46238ab-90d4-41b8-b546-6dbff06cf5ed","Type":"ContainerStarted","Data":"142e4661b770aaa69b754a25ef64f05a9d6f2fe9b9ebb196d61675eec6bc2300"} Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.929200 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.929269 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 
29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.929396 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.931035 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.931093 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.931111 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.931132 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.931151 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:38Z","lastTransitionTime":"2026-01-29T16:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.935518 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.954480 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:38Z 
is after 2025-08-24T17:21:41Z" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.964247 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.972337 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.973374 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:38 crc kubenswrapper[4886]: I0129 16:22:38.990613 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.004670 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.017931 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/c
ni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.031980 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.034100 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.034124 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.034131 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.034145 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.034156 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:39Z","lastTransitionTime":"2026-01-29T16:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.048694 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.064208 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.077738 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.091865 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:39Z is after 2025-08-24T17:21:41Z" Jan 29 
16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.105680 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.124188 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9
8100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.137269 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.137354 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.137371 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.137396 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.137410 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:39Z","lastTransitionTime":"2026-01-29T16:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.139126 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.154514 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.172315 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.188983 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.203869 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.218629 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.233516 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:39Z is after 2025-08-24T17:21:41Z" Jan 29 
16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.240994 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.241044 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.241057 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.241077 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.241091 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:39Z","lastTransitionTime":"2026-01-29T16:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.250981 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.269820 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.294721 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.310930 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77
3257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.323910 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.343204 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.343244 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.343253 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.343269 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.343279 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:39Z","lastTransitionTime":"2026-01-29T16:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.344144 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142e4661b770aaa69b754a25ef64f05a9d6f2fe9
b9ebb196d61675eec6bc2300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.355417 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.368053 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.445673 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.445704 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.445712 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.445725 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.445734 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:39Z","lastTransitionTime":"2026-01-29T16:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.548796 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.548840 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.548851 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.548868 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.548880 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:39Z","lastTransitionTime":"2026-01-29T16:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.590182 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 18:24:22.617816631 +0000 UTC Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.645648 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.645703 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.645720 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.645745 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.645763 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:39Z","lastTransitionTime":"2026-01-29T16:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:39 crc kubenswrapper[4886]: E0129 16:22:39.673173 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.678878 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.678950 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.678973 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.679003 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.679021 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:39Z","lastTransitionTime":"2026-01-29T16:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:39 crc kubenswrapper[4886]: E0129 16:22:39.701198 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.706846 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.706910 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.706930 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.706954 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.706971 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:39Z","lastTransitionTime":"2026-01-29T16:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:39 crc kubenswrapper[4886]: E0129 16:22:39.727378 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.732296 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.732387 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.732406 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.732431 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.732449 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:39Z","lastTransitionTime":"2026-01-29T16:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:39 crc kubenswrapper[4886]: E0129 16:22:39.749237 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [node-status payload elided; byte-for-byte identical to the 16:22:39.727378 attempt above] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.755708 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
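Every one of these patch attempts fails the same way: the API server's admission chain calls the network-node-identity webhook on https://127.0.0.1:9743, and TLS verification rejects the webhook's serving certificate because the node's clock (2026-01-29) is past the certificate's NotAfter date (2025-08-24). A minimal Go sketch of the validity-window check that produces this exact "certificate has expired or is not yet valid" failure; the certificate path is a hypothetical stand-in, since the log does not say where the webhook's serving certificate lives:

```go
// check_cert.go - sketch of the x509 validity-window check that fails above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path; substitute the webhook's actual serving certificate.
	pemBytes, err := os.ReadFile("/path/to/webhook-serving.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now().UTC()
	// A certificate is only acceptable while NotBefore <= now <= NotAfter.
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	case now.After(cert.NotAfter):
		// The case hit in this log: 2026-01-29T16:22:39Z is after 2025-08-24T17:21:41Z.
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}
```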
Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.755766 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.755786 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.755812 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.755833 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:39Z","lastTransitionTime":"2026-01-29T16:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:39 crc kubenswrapper[4886]: E0129 16:22:39.770691 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [node-status payload elided; byte-for-byte identical to the 16:22:39.727378 attempt above] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:39 crc kubenswrapper[4886]: E0129 16:22:39.770876 4886 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
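The kubelet gives up here because node status updates are retried only a fixed number of times per sync (nodeStatusUpdateRetry, 5 in the upstream kubelet) before "update node status exceeds retry count" is logged. A sketch of that bounded-retry shape, with hypothetical function names standing in for the real kubelet internals:

```go
// bounded_retry.go - the retry shape behind "exceeds retry count" (sketch only).
package main

import (
	"errors"
	"fmt"
)

// Upstream kubelet uses a small fixed retry budget per status sync.
const nodeStatusUpdateRetry = 5

// patchNodeStatus stands in for the PATCH that the admission webhook rejects.
func patchNodeStatus() error {
	return errors.New("failed calling webhook node.network-node-identity.openshift.io: certificate has expired")
}

func tryUpdateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		err := patchNodeStatus()
		if err == nil {
			return nil
		}
		fmt.Println("Error updating node status, will retry:", err)
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := tryUpdateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}
```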
Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.773105 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.773155 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.773170 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.773188 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.773201 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:39Z","lastTransitionTime":"2026-01-29T16:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.875911 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.876428 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.876441 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.876464 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.876476 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:39Z","lastTransitionTime":"2026-01-29T16:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.979684 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.979898 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.979932 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.979963 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:39 crc kubenswrapper[4886]: I0129 16:22:39.979982 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:39Z","lastTransitionTime":"2026-01-29T16:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.082523 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.082552 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.082560 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.082573 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.082582 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:40Z","lastTransitionTime":"2026-01-29T16:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.186267 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.186313 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.186352 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.186375 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.186390 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:40Z","lastTransitionTime":"2026-01-29T16:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.290115 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.290204 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.290227 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.290258 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.290283 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:40Z","lastTransitionTime":"2026-01-29T16:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
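Each "Node became not ready" entry embeds the Ready condition object the kubelet is trying to report. Two timestamps matter: lastHeartbeatTime advances on every report, while lastTransitionTime only moves when the status value itself flips. A stdlib-only sketch of the object's JSON shape (the real type is NodeCondition in k8s.io/api/core/v1; the local struct below is an illustrative stand-in):

```go
// node_condition.go - mirror of the condition={...} objects in these entries.
package main

import (
	"encoding/json"
	"fmt"
)

// Stand-in for k8s.io/api/core/v1.NodeCondition, trimmed to the fields
// that appear in the log.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`  // bumped on every report
	LastTransitionTime string `json:"lastTransitionTime"` // bumped only on status change
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	c := nodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  "2026-01-29T16:22:40Z",
		LastTransitionTime: "2026-01-29T16:22:40Z",
		Reason:             "KubeletNotReady",
		Message:            "container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?",
	}
	b, _ := json.Marshal(c)
	fmt.Println(string(b)) // same shape as condition={...} above
}
```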
Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.393244 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.393356 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.393378 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.393404 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.393421 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:40Z","lastTransitionTime":"2026-01-29T16:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.504925 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.504969 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.504977 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.504992 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.505002 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:40Z","lastTransitionTime":"2026-01-29T16:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
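"no CNI configuration file in /etc/kubernetes/cni/net.d/" means exactly what it says: the container runtime's CNI layer found no network configuration (typically *.conf, *.conflist, or *.json) in its configured directory, so it reports NetworkReady=false and the node stays NotReady until the network provider writes one. A sketch of that discovery step, assuming a simple directory scan like the one libcni-style loaders perform:

```go
// cni_conf_check.go - sketch of CNI config discovery in the conf dir.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// hasCNIConfig reports whether confDir holds at least one CNI network config.
func hasCNIConfig(confDir string) (bool, error) {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch strings.ToLower(filepath.Ext(e.Name())) {
		case ".conf", ".conflist", ".json": // extensions commonly accepted for CNI configs
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	if !ok {
		fmt.Println("NetworkReady=false: no CNI configuration file found; has your network provider started?")
		return
	}
	fmt.Println("NetworkReady=true")
}
```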
Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.590528 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 23:20:26.750146052 +0000 UTC Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.607805 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.607841 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.607865 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.607883 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.607896 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:40Z","lastTransitionTime":"2026-01-29T16:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.614498 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:22:40 crc kubenswrapper[4886]: E0129 16:22:40.614696 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.614757 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.614520 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:22:40 crc kubenswrapper[4886]: E0129 16:22:40.615216 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:22:40 crc kubenswrapper[4886]: E0129 16:22:40.614943 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
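The certificate_manager line deserves a close read: the kubelet-serving certificate itself is valid until 2026-02-24, but its rotation deadline (2025-12-01) already lies in the past, so the kubelet treats the certificate as due for rotation. client-go's certificate manager derives that deadline as a jittered point partway through the certificate's lifetime (roughly the 70-90% mark; the exact jitter below is an assumption modeled on upstream client-go, and the one-year lifetime is also assumed):

```go
// rotation_deadline.go - sketch of a jittered rotation-deadline computation.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a uniformly random point in [70%, 90%] of the
// certificate's validity window, measured from NotBefore.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(lifetime) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// NotAfter comes from the log line; the one-year lifetime is an assumption.
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	notBefore := notAfter.AddDate(-1, 0, 0)
	deadline := rotationDeadline(notBefore, notAfter)
	now := time.Date(2026, 1, 29, 16, 22, 40, 0, time.UTC) // clock in the log
	// When the deadline is already behind the clock (2025-12-01 in the log),
	// the manager schedules an immediate rotation attempt.
	fmt.Println("rotation deadline:", deadline.Format(time.RFC3339), "overdue:", now.After(deadline))
}
```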
Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.711060 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.711520 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.711670 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.711816 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.711938 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:40Z","lastTransitionTime":"2026-01-29T16:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.815284 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.815398 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.815422 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.815456 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.815477 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:40Z","lastTransitionTime":"2026-01-29T16:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.918211 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.918272 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.918294 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.918356 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.918381 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:40Z","lastTransitionTime":"2026-01-29T16:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.939467 4886 generic.go:334] "Generic (PLEG): container finished" podID="ae17b497-19c0-4f59-93e1-279069e2710a" containerID="725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19" exitCode=0 Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.939551 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" event={"ID":"ae17b497-19c0-4f59-93e1-279069e2710a","Type":"ContainerDied","Data":"725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19"} Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.962988 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:40 crc kubenswrapper[4886]: I0129 16:22:40.985861 4886 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.004604 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.021543 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.021601 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.021619 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.021628 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.021641 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.021650 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:41Z","lastTransitionTime":"2026-01-29T16:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.035494 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.056950 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.078636 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.091739 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.103113 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.123533 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142e4661b770aaa69b754a25ef64f05a9d6f2fe9b9ebb196d61675eec6bc2300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath
\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.124579 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.124601 4886 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.124609 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.124623 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.124633 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:41Z","lastTransitionTime":"2026-01-29T16:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.133140 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.144855 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.156995 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.171608 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.226059 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.226104 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.226116 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.226134 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.226145 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:41Z","lastTransitionTime":"2026-01-29T16:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.314662 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f"] Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.315314 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.318155 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.319160 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.328998 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.329041 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.329051 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.329065 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.329073 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:41Z","lastTransitionTime":"2026-01-29T16:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.331559 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.342901 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.357794 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\
\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.372109 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a420fc-ad8c-41c3-82c3-1e23731e1f55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tpc4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.384018 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.398140 4886 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.409313 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.420672 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98a420fc-ad8c-41c3-82c3-1e23731e1f55-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tpc4f\" (UID: \"98a420fc-ad8c-41c3-82c3-1e23731e1f55\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.420707 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98a420fc-ad8c-41c3-82c3-1e23731e1f55-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tpc4f\" (UID: \"98a420fc-ad8c-41c3-82c3-1e23731e1f55\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.420749 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98a420fc-ad8c-41c3-82c3-1e23731e1f55-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tpc4f\" (UID: \"98a420fc-ad8c-41c3-82c3-1e23731e1f55\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.420780 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94n7k\" (UniqueName: \"kubernetes.io/projected/98a420fc-ad8c-41c3-82c3-1e23731e1f55-kube-api-access-94n7k\") pod \"ovnkube-control-plane-749d76644c-tpc4f\" (UID: \"98a420fc-ad8c-41c3-82c3-1e23731e1f55\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.421275 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.431558 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.431583 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.431591 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.431604 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.431614 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:41Z","lastTransitionTime":"2026-01-29T16:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.435462 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.451577 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\"
:\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.464441 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.475739 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.484635 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.503624 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142e4661b770aaa69b754a25ef64f05a9d6f2fe9b9ebb196d61675eec6bc2300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath
\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.513447 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.521960 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98a420fc-ad8c-41c3-82c3-1e23731e1f55-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tpc4f\" (UID: \"98a420fc-ad8c-41c3-82c3-1e23731e1f55\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.521997 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98a420fc-ad8c-41c3-82c3-1e23731e1f55-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tpc4f\" (UID: \"98a420fc-ad8c-41c3-82c3-1e23731e1f55\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.522043 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98a420fc-ad8c-41c3-82c3-1e23731e1f55-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tpc4f\" (UID: \"98a420fc-ad8c-41c3-82c3-1e23731e1f55\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.522068 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94n7k\" (UniqueName: \"kubernetes.io/projected/98a420fc-ad8c-41c3-82c3-1e23731e1f55-kube-api-access-94n7k\") pod \"ovnkube-control-plane-749d76644c-tpc4f\" (UID: \"98a420fc-ad8c-41c3-82c3-1e23731e1f55\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.522516 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98a420fc-ad8c-41c3-82c3-1e23731e1f55-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tpc4f\" (UID: \"98a420fc-ad8c-41c3-82c3-1e23731e1f55\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.522682 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98a420fc-ad8c-41c3-82c3-1e23731e1f55-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tpc4f\" (UID: \"98a420fc-ad8c-41c3-82c3-1e23731e1f55\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.528524 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98a420fc-ad8c-41c3-82c3-1e23731e1f55-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tpc4f\" (UID: \"98a420fc-ad8c-41c3-82c3-1e23731e1f55\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.533845 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.533873 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.533881 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.533894 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.533903 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:41Z","lastTransitionTime":"2026-01-29T16:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.539221 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94n7k\" (UniqueName: \"kubernetes.io/projected/98a420fc-ad8c-41c3-82c3-1e23731e1f55-kube-api-access-94n7k\") pod \"ovnkube-control-plane-749d76644c-tpc4f\" (UID: \"98a420fc-ad8c-41c3-82c3-1e23731e1f55\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.591469 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 19:25:14.387229092 +0000 UTC Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.634270 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.643110 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.643157 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.643170 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.643189 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.643202 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:41Z","lastTransitionTime":"2026-01-29T16:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:41 crc kubenswrapper[4886]: W0129 16:22:41.651942 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98a420fc_ad8c_41c3_82c3_1e23731e1f55.slice/crio-2e860900363cf234d56c149e45db17ab400f881caf5e69df70b4c7a846abf5d9 WatchSource:0}: Error finding container 2e860900363cf234d56c149e45db17ab400f881caf5e69df70b4c7a846abf5d9: Status 404 returned error can't find the container with id 2e860900363cf234d56c149e45db17ab400f881caf5e69df70b4c7a846abf5d9 Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.749123 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.749158 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.749167 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.749182 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.749192 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:41Z","lastTransitionTime":"2026-01-29T16:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.852642 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.852965 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.852984 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.853008 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.853026 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:41Z","lastTransitionTime":"2026-01-29T16:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.956139 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.956194 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.956211 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.956233 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:41 crc kubenswrapper[4886]: I0129 16:22:41.956250 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:41Z","lastTransitionTime":"2026-01-29T16:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.009554 4886 generic.go:334] "Generic (PLEG): container finished" podID="ae17b497-19c0-4f59-93e1-279069e2710a" containerID="b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115" exitCode=0 Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.009620 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" event={"ID":"ae17b497-19c0-4f59-93e1-279069e2710a","Type":"ContainerDied","Data":"b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115"} Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.015170 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" event={"ID":"98a420fc-ad8c-41c3-82c3-1e23731e1f55","Type":"ContainerStarted","Data":"2e860900363cf234d56c149e45db17ab400f881caf5e69df70b4c7a846abf5d9"} Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.026094 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.051775 4886 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.060051 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.060097 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.060114 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.060135 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.060151 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:42Z","lastTransitionTime":"2026-01-29T16:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.069475 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.082494 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.093972 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.111583 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.124394 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.143549 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.157300 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.162775 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.162833 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.162852 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.162876 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.162901 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:42Z","lastTransitionTime":"2026-01-29T16:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.183996 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142e4661b770aaa69b754a25ef64f05a9d6f2fe9
b9ebb196d61675eec6bc2300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.195288 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.207256 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.219278 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.238986 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\
\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.250208 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a420fc-ad8c-41c3-82c3-1e23731e1f55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tpc4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:42Z is after 2025-08-24T17:21:41Z"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.264983 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.265034 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.265050 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.265090 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.265108 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:42Z","lastTransitionTime":"2026-01-29T16:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.367569 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.367606 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.367616 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.367631 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.367645 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:42Z","lastTransitionTime":"2026-01-29T16:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.470545 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.470603 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.470627 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.470652 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.470669 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:42Z","lastTransitionTime":"2026-01-29T16:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.573155 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.573184 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.573196 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.573213 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.573224 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:42Z","lastTransitionTime":"2026-01-29T16:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.592428 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 12:55:56.510234297 +0000 UTC
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.614998 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.615100 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.615200 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:22:42 crc kubenswrapper[4886]: E0129 16:22:42.615224 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:22:42 crc kubenswrapper[4886]: E0129 16:22:42.615446 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:22:42 crc kubenswrapper[4886]: E0129 16:22:42.615613 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.675920 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.675991 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.676012 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.676036 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.676055 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:42Z","lastTransitionTime":"2026-01-29T16:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.779510 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.779591 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.779615 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.779648 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.779671 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:42Z","lastTransitionTime":"2026-01-29T16:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.881387 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.881422 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.881432 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.881447 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.881460 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:42Z","lastTransitionTime":"2026-01-29T16:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.984318 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.984367 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.984381 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.984398 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:42 crc kubenswrapper[4886]: I0129 16:22:42.984409 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:42Z","lastTransitionTime":"2026-01-29T16:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.021724 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" event={"ID":"98a420fc-ad8c-41c3-82c3-1e23731e1f55","Type":"ContainerStarted","Data":"6ef95d9dbe53c4f2428892b94b669bade8eeae51041691998500d0d2be87a40b"}
Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.021874 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" event={"ID":"98a420fc-ad8c-41c3-82c3-1e23731e1f55","Type":"ContainerStarted","Data":"689b39c75b6ca5561959fd753c3fe27c3ad2584d5efc8ffa1edd4a0b14b91bd5"}
Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.031320 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" event={"ID":"ae17b497-19c0-4f59-93e1-279069e2710a","Type":"ContainerStarted","Data":"ca897a9b4e4a2b647e34e013a9d20e83e7576e3f2f4a44d30ce36c4efff1a967"}
Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.048564 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.071768 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.086452 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.086497 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.086507 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.086522 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.086531 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:43Z","lastTransitionTime":"2026-01-29T16:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.109495 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.188103 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.188938 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.189006 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.189025 4886 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.189051 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.189069 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:43Z","lastTransitionTime":"2026-01-29T16:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.191627 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-c7wkw"] Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.192056 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:22:43 crc kubenswrapper[4886]: E0129 16:22:43.192115 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.206927 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\
"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.234381 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142e4661b770aaa69b754a25ef64f05a9d6f2fe9
b9ebb196d61675eec6bc2300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.247650 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/75261312-030c-44eb-8d08-07a35f5bcfcc-metrics-certs\") pod \"network-metrics-daemon-c7wkw\" (UID: \"75261312-030c-44eb-8d08-07a35f5bcfcc\") " pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.247730 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psdcc\" (UniqueName: \"kubernetes.io/projected/75261312-030c-44eb-8d08-07a35f5bcfcc-kube-api-access-psdcc\") pod \"network-metrics-daemon-c7wkw\" (UID: \"75261312-030c-44eb-8d08-07a35f5bcfcc\") " pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.248972 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.262995 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.275811 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.286897 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\
\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.296377 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.296419 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.296432 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.296449 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.296461 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:43Z","lastTransitionTime":"2026-01-29T16:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.297001 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a420fc-ad8c-41c3-82c3-1e23731e1f55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://689b39c75b6ca5561959fd753c3fe27c3ad2584d5efc8ffa1edd4a0b14b91bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef95d9dbe53c4f2428892b94b669bade8eeae51041691998500d0d2be87a40b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tpc4f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.311271 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.324147 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.336916 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.348026 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/75261312-030c-44eb-8d08-07a35f5bcfcc-metrics-certs\") pod \"network-metrics-daemon-c7wkw\" (UID: \"75261312-030c-44eb-8d08-07a35f5bcfcc\") " pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.348075 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psdcc\" (UniqueName: \"kubernetes.io/projected/75261312-030c-44eb-8d08-07a35f5bcfcc-kube-api-access-psdcc\") pod \"network-metrics-daemon-c7wkw\" (UID: \"75261312-030c-44eb-8d08-07a35f5bcfcc\") " pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:22:43 crc kubenswrapper[4886]: E0129 16:22:43.348199 4886 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:22:43 crc kubenswrapper[4886]: E0129 16:22:43.348267 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75261312-030c-44eb-8d08-07a35f5bcfcc-metrics-certs podName:75261312-030c-44eb-8d08-07a35f5bcfcc nodeName:}" failed. No retries permitted until 2026-01-29 16:22:43.848251123 +0000 UTC m=+46.756970385 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/75261312-030c-44eb-8d08-07a35f5bcfcc-metrics-certs") pod "network-metrics-daemon-c7wkw" (UID: "75261312-030c-44eb-8d08-07a35f5bcfcc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.350269 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16
:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.365704 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.368026 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psdcc\" (UniqueName: \"kubernetes.io/projected/75261312-030c-44eb-8d08-07a35f5bcfcc-kube-api-access-psdcc\") pod \"network-metrics-daemon-c7wkw\" (UID: \"75261312-030c-44eb-8d08-07a35f5bcfcc\") " pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 
16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.383042 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.394860 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.398278 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.398334 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.398346 4886 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.398362 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.398375 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:43Z","lastTransitionTime":"2026-01-29T16:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.404238 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.423988 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142e4661b770aaa69b754a25ef64f05a9d6f2fe9b9ebb196d61675eec6bc2300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.434406 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c7wkw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75261312-030c-44eb-8d08-07a35f5bcfcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c7wkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.446961 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.457163 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.486424 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\
\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.495890 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a420fc-ad8c-41c3-82c3-1e23731e1f55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://689b39c75b6ca5561959fd753c3fe27c3ad2584d5efc8ffa1edd4a0b14b91bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://6ef95d9dbe53c4f2428892b94b669bade8eeae51041691998500d0d2be87a40b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tpc4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.500618 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.500650 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.500660 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.500676 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.500685 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:43Z","lastTransitionTime":"2026-01-29T16:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.508517 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.518613 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.531138 4886 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.545294 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.559146 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.574431 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca897a9b4e4a2b647e34e013a9d20e83e7576e3f2f4a44d30ce36c4efff1a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.593396 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 22:21:32.425759309 +0000 UTC Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.602699 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.602726 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.602734 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.602747 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.602756 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:43Z","lastTransitionTime":"2026-01-29T16:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.705490 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.705548 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.705564 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.705588 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.705604 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:43Z","lastTransitionTime":"2026-01-29T16:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.809013 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.809076 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.809094 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.809124 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.809145 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:43Z","lastTransitionTime":"2026-01-29T16:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.852214 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/75261312-030c-44eb-8d08-07a35f5bcfcc-metrics-certs\") pod \"network-metrics-daemon-c7wkw\" (UID: \"75261312-030c-44eb-8d08-07a35f5bcfcc\") " pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:22:43 crc kubenswrapper[4886]: E0129 16:22:43.852438 4886 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:22:43 crc kubenswrapper[4886]: E0129 16:22:43.852543 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75261312-030c-44eb-8d08-07a35f5bcfcc-metrics-certs podName:75261312-030c-44eb-8d08-07a35f5bcfcc nodeName:}" failed. No retries permitted until 2026-01-29 16:22:44.852514849 +0000 UTC m=+47.761234161 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/75261312-030c-44eb-8d08-07a35f5bcfcc-metrics-certs") pod "network-metrics-daemon-c7wkw" (UID: "75261312-030c-44eb-8d08-07a35f5bcfcc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.911432 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.911471 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.911482 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.911499 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:43 crc kubenswrapper[4886]: I0129 16:22:43.911510 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:43Z","lastTransitionTime":"2026-01-29T16:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.014524 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.014568 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.014579 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.014611 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.014622 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:44Z","lastTransitionTime":"2026-01-29T16:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.117546 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.117605 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.117623 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.117648 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.117665 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:44Z","lastTransitionTime":"2026-01-29T16:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.219816 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.220268 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.220294 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.220359 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.220383 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:44Z","lastTransitionTime":"2026-01-29T16:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.322764 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.322818 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.322836 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.322857 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.322872 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:44Z","lastTransitionTime":"2026-01-29T16:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.425388 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.425425 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.425435 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.425470 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.425483 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:44Z","lastTransitionTime":"2026-01-29T16:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.527681 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.527748 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.527765 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.527792 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.527810 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:44Z","lastTransitionTime":"2026-01-29T16:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.594241 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 18:51:12.66355564 +0000 UTC Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.614648 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:22:44 crc kubenswrapper[4886]: E0129 16:22:44.614785 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.614833 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.614906 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.614924 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:22:44 crc kubenswrapper[4886]: E0129 16:22:44.615034 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:22:44 crc kubenswrapper[4886]: E0129 16:22:44.615141 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:22:44 crc kubenswrapper[4886]: E0129 16:22:44.615235 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.630586 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.630621 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.630639 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.630653 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.630661 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:44Z","lastTransitionTime":"2026-01-29T16:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.733553 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.733581 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.733591 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.733606 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.733617 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:44Z","lastTransitionTime":"2026-01-29T16:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.836294 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.836321 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.836355 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.836372 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.836382 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:44Z","lastTransitionTime":"2026-01-29T16:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.863204 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/75261312-030c-44eb-8d08-07a35f5bcfcc-metrics-certs\") pod \"network-metrics-daemon-c7wkw\" (UID: \"75261312-030c-44eb-8d08-07a35f5bcfcc\") " pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:22:44 crc kubenswrapper[4886]: E0129 16:22:44.863439 4886 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:22:44 crc kubenswrapper[4886]: E0129 16:22:44.863506 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75261312-030c-44eb-8d08-07a35f5bcfcc-metrics-certs podName:75261312-030c-44eb-8d08-07a35f5bcfcc nodeName:}" failed. No retries permitted until 2026-01-29 16:22:46.86348585 +0000 UTC m=+49.772205132 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/75261312-030c-44eb-8d08-07a35f5bcfcc-metrics-certs") pod "network-metrics-daemon-c7wkw" (UID: "75261312-030c-44eb-8d08-07a35f5bcfcc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.940561 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.940619 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.940628 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.940643 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:44 crc kubenswrapper[4886]: I0129 16:22:44.940671 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:44Z","lastTransitionTime":"2026-01-29T16:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.043419 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.043463 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.043476 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.043498 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.043514 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:45Z","lastTransitionTime":"2026-01-29T16:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.147263 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.147392 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.147420 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.147452 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.147483 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:45Z","lastTransitionTime":"2026-01-29T16:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.249962 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.250038 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.250056 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.250078 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.250096 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:45Z","lastTransitionTime":"2026-01-29T16:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.353135 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.353197 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.353215 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.353236 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.353252 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:45Z","lastTransitionTime":"2026-01-29T16:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.455729 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.455787 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.455797 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.455811 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.455821 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:45Z","lastTransitionTime":"2026-01-29T16:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.564282 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.564406 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.564432 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.564466 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.564491 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:45Z","lastTransitionTime":"2026-01-29T16:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.594882 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 18:54:39.401223658 +0000 UTC Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.666868 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.666911 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.666920 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.666938 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.666949 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:45Z","lastTransitionTime":"2026-01-29T16:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.769517 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.769559 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.769570 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.769591 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.769603 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:45Z","lastTransitionTime":"2026-01-29T16:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.872543 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.872590 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.872610 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.872634 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.872652 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:45Z","lastTransitionTime":"2026-01-29T16:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.975703 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.975765 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.975777 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.975794 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:45 crc kubenswrapper[4886]: I0129 16:22:45.975806 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:45Z","lastTransitionTime":"2026-01-29T16:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.078908 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.079000 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.079022 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.079053 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.079075 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:46Z","lastTransitionTime":"2026-01-29T16:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.181880 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.181929 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.181947 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.181979 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.181992 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:46Z","lastTransitionTime":"2026-01-29T16:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.284610 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.284685 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.284708 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.284740 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.284768 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:46Z","lastTransitionTime":"2026-01-29T16:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.387671 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.387713 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.387724 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.387740 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.387751 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:46Z","lastTransitionTime":"2026-01-29T16:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.490462 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.490507 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.490518 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.490536 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.490547 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:46Z","lastTransitionTime":"2026-01-29T16:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.594754 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.594809 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.594819 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.594835 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.594846 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:46Z","lastTransitionTime":"2026-01-29T16:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.595017 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 02:26:29.064879399 +0000 UTC Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.615613 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:22:46 crc kubenswrapper[4886]: E0129 16:22:46.615744 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.616141 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:22:46 crc kubenswrapper[4886]: E0129 16:22:46.616191 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.616238 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:22:46 crc kubenswrapper[4886]: E0129 16:22:46.616284 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.616413 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:22:46 crc kubenswrapper[4886]: E0129 16:22:46.616461 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.697666 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.697721 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.697741 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.697765 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.697781 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:46Z","lastTransitionTime":"2026-01-29T16:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.800316 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.800382 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.800394 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.800410 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.800424 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:46Z","lastTransitionTime":"2026-01-29T16:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.883908 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/75261312-030c-44eb-8d08-07a35f5bcfcc-metrics-certs\") pod \"network-metrics-daemon-c7wkw\" (UID: \"75261312-030c-44eb-8d08-07a35f5bcfcc\") " pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:22:46 crc kubenswrapper[4886]: E0129 16:22:46.884076 4886 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:22:46 crc kubenswrapper[4886]: E0129 16:22:46.884162 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75261312-030c-44eb-8d08-07a35f5bcfcc-metrics-certs podName:75261312-030c-44eb-8d08-07a35f5bcfcc nodeName:}" failed. No retries permitted until 2026-01-29 16:22:50.884141585 +0000 UTC m=+53.792860857 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/75261312-030c-44eb-8d08-07a35f5bcfcc-metrics-certs") pod "network-metrics-daemon-c7wkw" (UID: "75261312-030c-44eb-8d08-07a35f5bcfcc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.912053 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.912113 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.912130 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.912154 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:46 crc kubenswrapper[4886]: I0129 16:22:46.912170 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:46Z","lastTransitionTime":"2026-01-29T16:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.014194 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.014266 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.014284 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.014307 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.014351 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:47Z","lastTransitionTime":"2026-01-29T16:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.046206 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bsnwn_d46238ab-90d4-41b8-b546-6dbff06cf5ed/ovnkube-controller/0.log" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.048964 4886 generic.go:334] "Generic (PLEG): container finished" podID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerID="142e4661b770aaa69b754a25ef64f05a9d6f2fe9b9ebb196d61675eec6bc2300" exitCode=1 Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.049003 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" event={"ID":"d46238ab-90d4-41b8-b546-6dbff06cf5ed","Type":"ContainerDied","Data":"142e4661b770aaa69b754a25ef64f05a9d6f2fe9b9ebb196d61675eec6bc2300"} Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.049686 4886 scope.go:117] "RemoveContainer" containerID="142e4661b770aaa69b754a25ef64f05a9d6f2fe9b9ebb196d61675eec6bc2300" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.062611 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.075550 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca897a9b4e4a2b647e34e013a9d20e83e7576e3f2f4a44d30ce36c4efff1a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.092600 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142e4661b770aaa69b754a25ef64f05a9d6f2fe9
b9ebb196d61675eec6bc2300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://142e4661b770aaa69b754a25ef64f05a9d6f2fe9b9ebb196d61675eec6bc2300\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:22:46Z\\\",\\\"message\\\":\\\"ent handler 1 for removal\\\\nI0129 16:22:44.236556 6133 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 16:22:44.236443 6133 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 16:22:44.236612 6133 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:22:44.236556 6133 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 16:22:44.236638 6133 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 16:22:44.236628 6133 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:22:44.236676 6133 factory.go:656] Stopping watch factory\\\\nI0129 16:22:44.236675 6133 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:22:44.236728 6133 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:22:44.236737 6133 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 16:22:44.236790 6133 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:22:44.236807 6133 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551
aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.105495 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.117091 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.117136 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.117147 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.117163 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.117176 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:47Z","lastTransitionTime":"2026-01-29T16:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.122486 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.135995 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.149917 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.167086 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a420fc-ad8c-41c3-82c3-1e23731e1f55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://689b39c75b6ca5561959fd753c3fe27c3ad2584d5efc8ffa1edd4a0b14b91bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef95d9dbe53c4f2428892b94b669bade8eeae51041691998500d0d2be87a40b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tpc4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 29 
16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.181728 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c7wkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75261312-030c-44eb-8d08-07a35f5bcfcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c7wkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.197451 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.213674 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.219506 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.219558 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.219569 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.219587 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.219600 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:47Z","lastTransitionTime":"2026-01-29T16:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.226248 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.236738 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.245612 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.254003 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 29 
16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.264998 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.321852 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.321880 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.321889 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.321902 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.321911 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:47Z","lastTransitionTime":"2026-01-29T16:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.424581 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.424639 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.424656 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.424679 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.424696 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:47Z","lastTransitionTime":"2026-01-29T16:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.527644 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.527711 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.527734 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.527766 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.527790 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:47Z","lastTransitionTime":"2026-01-29T16:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.596119 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 08:30:14.604474331 +0000 UTC Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.630737 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.630991 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.631120 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.631251 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.631438 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:47Z","lastTransitionTime":"2026-01-29T16:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.734381 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.734451 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.734471 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.734508 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.734527 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:47Z","lastTransitionTime":"2026-01-29T16:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.837168 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.837241 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.837259 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.837285 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.837302 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:47Z","lastTransitionTime":"2026-01-29T16:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.940081 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.940112 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.940120 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.940133 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:47 crc kubenswrapper[4886]: I0129 16:22:47.940142 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:47Z","lastTransitionTime":"2026-01-29T16:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.041922 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.041946 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.041955 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.041969 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.041978 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:48Z","lastTransitionTime":"2026-01-29T16:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.054390 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bsnwn_d46238ab-90d4-41b8-b546-6dbff06cf5ed/ovnkube-controller/0.log" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.064145 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" event={"ID":"d46238ab-90d4-41b8-b546-6dbff06cf5ed","Type":"ContainerStarted","Data":"21734fb20c50ad0defe1dc5f098c4d5a6406a0313fb256691eef65eef2b91b0c"} Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.143951 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.143990 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.144001 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.144018 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.144030 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:48Z","lastTransitionTime":"2026-01-29T16:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.246396 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.246453 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.246465 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.246484 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.246495 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:48Z","lastTransitionTime":"2026-01-29T16:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.348872 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.348939 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.348963 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.348992 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.349014 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:48Z","lastTransitionTime":"2026-01-29T16:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.452474 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.452520 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.452537 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.452560 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.452577 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:48Z","lastTransitionTime":"2026-01-29T16:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.555408 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.555454 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.555470 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.555489 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.555505 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:48Z","lastTransitionTime":"2026-01-29T16:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.596814 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 01:09:22.539557043 +0000 UTC Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.614735 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.614771 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:22:48 crc kubenswrapper[4886]: E0129 16:22:48.614897 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.615203 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:22:48 crc kubenswrapper[4886]: E0129 16:22:48.615298 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.615512 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:22:48 crc kubenswrapper[4886]: E0129 16:22:48.615523 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:22:48 crc kubenswrapper[4886]: E0129 16:22:48.615593 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.632934 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.651359 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca897a9b4e4a2b647e34e013a9d20e83e7576e3f2f4a44d30ce36c4efff1a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.658751 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.658796 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:48 crc 
kubenswrapper[4886]: I0129 16:22:48.658808 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.658825 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.658839 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:48Z","lastTransitionTime":"2026-01-29T16:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.664913 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 
29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.681009 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs
\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.699056 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.712835 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.752302 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142e4661b770aaa69b754a25ef64f05a9d6f2fe9b9ebb196d61675eec6bc2300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://142e4661b770aaa69b754a25ef64f05a9d6f2fe9b9ebb196d61675eec6bc2300\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:22:46Z\\\",\\\"message\\\":\\\"ent handler 1 for removal\\\\nI0129 16:22:44.236556 6133 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 16:22:44.236443 6133 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 16:22:44.236612 6133 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:22:44.236556 6133 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 16:22:44.236638 6133 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 16:22:44.236628 6133 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:22:44.236676 6133 factory.go:656] Stopping watch factory\\\\nI0129 16:22:44.236675 6133 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:22:44.236728 6133 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:22:44.236737 6133 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 16:22:44.236790 6133 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:22:44.236807 6133 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551
aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.761077 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.761125 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.761139 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.761158 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.761174 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:48Z","lastTransitionTime":"2026-01-29T16:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.763389 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c7wkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75261312-030c-44eb-8d08-07a35f5bcfcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c7wkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.780435 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.794508 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.810681 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\
\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.824274 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a420fc-ad8c-41c3-82c3-1e23731e1f55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://689b39c75b6ca5561959fd753c3fe27c3ad2584d5efc8ffa1edd4a0b14b91bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://6ef95d9dbe53c4f2428892b94b669bade8eeae51041691998500d0d2be87a40b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tpc4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.840389 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.859432 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.863228 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.863268 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.863279 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.863296 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.863308 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:48Z","lastTransitionTime":"2026-01-29T16:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.871915 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.884787 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.966365 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.966423 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.966440 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.966464 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:48 crc kubenswrapper[4886]: I0129 16:22:48.966486 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:48Z","lastTransitionTime":"2026-01-29T16:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.067939 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.068271 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.068292 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.068300 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.068311 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.068319 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:49Z","lastTransitionTime":"2026-01-29T16:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.083393 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.101515 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.113939 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.128851 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a420fc-ad8c-41c3-82c3-1e23731e1f55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://689b39c75b6ca5561959fd753c3fe27c3ad2584d5efc8ffa1edd4a0b14b91bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef95d9dbe53c4f2428892b94b669bade8eeae51041691998500d0d2be87a40b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tpc4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.140188 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c7wkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75261312-030c-44eb-8d08-07a35f5bcfcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c7wkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.160270 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:49Z is after 2025-08-24T17:21:41Z"
Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.171686 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.172019 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.172120 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.172208 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.172289 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:49Z","lastTransitionTime":"2026-01-29T16:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
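[Editor's note] Every "Failed to update status for pod" record in this stretch dies on the same TLS handshake: the kubelet POSTs to the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743/pod and rejects its serving certificate, which expired 2025-08-24T17:21:41Z while the node clock reads 2026-01-29. A minimal sketch of that check from the node, with host and port taken from these log lines (assumes Python plus the third-party cryptography package, which is not part of the logged tooling):

```python
# Sketch: look at the webhook's serving certificate the way the kubelet's
# TLS verification does, but without failing the handshake -- fetch the
# cert unverified, then compare its validity window to the local clock.
import datetime
import ssl

from cryptography import x509  # third-party, assumed installed

HOST, PORT = "127.0.0.1", 9743  # webhook endpoint from the log

pem = ssl.get_server_certificate((HOST, PORT))  # no chain/hostname checks
cert = x509.load_pem_x509_certificate(pem.encode())

now = datetime.datetime.now(datetime.timezone.utc)
print("notBefore:", cert.not_valid_before_utc)  # cryptography >= 42 API
print("notAfter: ", cert.not_valid_after_utc)
if now > cert.not_valid_after_utc:
    # Same condition the log reports as "certificate has expired or is
    # not yet valid: current time ... is after ..."
    print("expired: current time", now.isoformat(), "is after notAfter")
```

Roughly the same answer should come from `openssl s_client -connect 127.0.0.1:9743 </dev/null 2>/dev/null | openssl x509 -noout -dates` on the node.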
Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.183084 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:49Z is after 2025-08-24T17:21:41Z"
Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.198287 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.214188 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.231884 4886 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.252537 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca897a9b4e4a2b647e34e013a9d20e83e7576e3f2f4a44d30ce36c4efff1a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.271174 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:49Z is after 2025-08-24T17:21:41Z"
Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.275133 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.275160 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.275170 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.275187 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.275199 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:49Z","lastTransitionTime":"2026-01-29T16:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
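[Editor's note] The repeated NodeNotReady condition above reduces to one readiness probe: the container runtime reports NetworkReady=false until a CNI network config appears in /etc/kubernetes/cni/net.d/, which ovn-kubernetes only writes once ovnkube-controller stays up. A rough stand-in for that directory check (path taken from the log message; the accepted extensions follow libcni's usual behavior and are an assumption here):

```python
# Sketch: approximate the "network plugin ready?" check behind the
# repeated NetworkPluginNotReady condition. libcni loads *.conf,
# *.conflist and *.json from the conf dir (assumption); an empty or
# missing dir keeps the node NotReady.
from pathlib import Path

CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")  # path from the log

configs = sorted(
    p.name for p in CNI_CONF_DIR.iterdir()
    if p.suffix in {".conf", ".conflist", ".json"}
) if CNI_CONF_DIR.is_dir() else []

if configs:
    print("network plugin ready; configs:", configs)
else:
    print("no CNI configuration file in", CNI_CONF_DIR, "- node stays NotReady")
```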
Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.286838 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:49Z is after 2025-08-24T17:21:41Z"
Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.301569 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.340250 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21734fb20c50ad0defe1dc5f098c4d5a6406a0313fb256691eef65eef2b91b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://142e4661b770aaa69b754a25ef64f05a9d6f2fe9b9ebb196d61675eec6bc2300\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:22:46Z\\\",\\\"message\\\":\\\"ent handler 1 for removal\\\\nI0129 16:22:44.236556 6133 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 16:22:44.236443 6133 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 16:22:44.236612 6133 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:22:44.236556 6133 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 16:22:44.236638 6133 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 16:22:44.236628 6133 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:22:44.236676 6133 factory.go:656] Stopping watch factory\\\\nI0129 16:22:44.236675 6133 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:22:44.236728 6133 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:22:44.236737 6133 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 16:22:44.236790 6133 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:22:44.236807 6133 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.356415 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.378537 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.378602 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.378622 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.378693 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.378713 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:49Z","lastTransitionTime":"2026-01-29T16:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.481248 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.481307 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.481315 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.481354 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.481372 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:49Z","lastTransitionTime":"2026-01-29T16:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.585245 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.585541 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.585554 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.585570 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.585581 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:49Z","lastTransitionTime":"2026-01-29T16:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.597418 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 08:43:33.257310421 +0000 UTC Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.614898 4886 scope.go:117] "RemoveContainer" containerID="8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.689783 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.689849 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.689864 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.689890 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.689907 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:49Z","lastTransitionTime":"2026-01-29T16:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.792376 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.792411 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.792419 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.792442 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.792454 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:49Z","lastTransitionTime":"2026-01-29T16:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.894809 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.894855 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.894872 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.894888 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.894898 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:49Z","lastTransitionTime":"2026-01-29T16:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.909229 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.909282 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.909292 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.909307 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.909317 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:49Z","lastTransitionTime":"2026-01-29T16:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:49 crc kubenswrapper[4886]: E0129 16:22:49.919620 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.923352 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.923381 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.923391 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.923405 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.923414 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:49Z","lastTransitionTime":"2026-01-29T16:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:49 crc kubenswrapper[4886]: E0129 16:22:49.936632 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.939951 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.939996 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.940007 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.940024 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.940034 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:49Z","lastTransitionTime":"2026-01-29T16:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:49 crc kubenswrapper[4886]: E0129 16:22:49.951829 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.954917 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.954976 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.954993 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.955014 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.955029 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:49Z","lastTransitionTime":"2026-01-29T16:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:49 crc kubenswrapper[4886]: E0129 16:22:49.966376 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.969969 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.970008 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.970021 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.970041 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.970052 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:49Z","lastTransitionTime":"2026-01-29T16:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:49 crc kubenswrapper[4886]: E0129 16:22:49.986181 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:49 crc kubenswrapper[4886]: E0129 16:22:49.986350 4886 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.997586 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.997618 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.997626 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.997640 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:49 crc kubenswrapper[4886]: I0129 16:22:49.997649 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:49Z","lastTransitionTime":"2026-01-29T16:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.100698 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.100751 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.100762 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.100780 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.100793 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:50Z","lastTransitionTime":"2026-01-29T16:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.204295 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.204392 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.204408 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.204437 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.204455 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:50Z","lastTransitionTime":"2026-01-29T16:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.307923 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.307967 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.307979 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.307994 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.308005 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:50Z","lastTransitionTime":"2026-01-29T16:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.410358 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.410397 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.410409 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.410425 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.410436 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:50Z","lastTransitionTime":"2026-01-29T16:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.513660 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.513709 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.513721 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.513738 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.513750 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:50Z","lastTransitionTime":"2026-01-29T16:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.597716 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 01:21:43.978926911 +0000 UTC Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.614126 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.614171 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:22:50 crc kubenswrapper[4886]: E0129 16:22:50.614225 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.614116 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.614243 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:22:50 crc kubenswrapper[4886]: E0129 16:22:50.614323 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:22:50 crc kubenswrapper[4886]: E0129 16:22:50.614458 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:22:50 crc kubenswrapper[4886]: E0129 16:22:50.614570 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.616277 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.616307 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.616318 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.616349 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.616360 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:50Z","lastTransitionTime":"2026-01-29T16:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.718699 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.718772 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.718790 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.718813 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.718829 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:50Z","lastTransitionTime":"2026-01-29T16:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.821065 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.821142 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.821167 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.821199 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.821222 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:50Z","lastTransitionTime":"2026-01-29T16:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.917254 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/75261312-030c-44eb-8d08-07a35f5bcfcc-metrics-certs\") pod \"network-metrics-daemon-c7wkw\" (UID: \"75261312-030c-44eb-8d08-07a35f5bcfcc\") " pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:22:50 crc kubenswrapper[4886]: E0129 16:22:50.917457 4886 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:22:50 crc kubenswrapper[4886]: E0129 16:22:50.917555 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75261312-030c-44eb-8d08-07a35f5bcfcc-metrics-certs podName:75261312-030c-44eb-8d08-07a35f5bcfcc nodeName:}" failed. No retries permitted until 2026-01-29 16:22:58.917532982 +0000 UTC m=+61.826252264 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/75261312-030c-44eb-8d08-07a35f5bcfcc-metrics-certs") pod "network-metrics-daemon-c7wkw" (UID: "75261312-030c-44eb-8d08-07a35f5bcfcc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.923761 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.923833 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.923851 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.923877 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:50 crc kubenswrapper[4886]: I0129 16:22:50.923895 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:50Z","lastTransitionTime":"2026-01-29T16:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.026308 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.026429 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.026455 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.026485 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.026504 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:51Z","lastTransitionTime":"2026-01-29T16:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.075704 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.077789 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2d2126e0e150d4a578976def8715d596ae31d0561b0eaa832061d4fb86a8a930"} Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.129287 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.129462 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.129492 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.129526 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.129550 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:51Z","lastTransitionTime":"2026-01-29T16:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.231392 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.231444 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.231455 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.231473 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.231485 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:51Z","lastTransitionTime":"2026-01-29T16:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.334925 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.334971 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.334983 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.335001 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.335013 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:51Z","lastTransitionTime":"2026-01-29T16:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.438171 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.438235 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.438251 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.438277 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.438294 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:51Z","lastTransitionTime":"2026-01-29T16:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.541198 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.541248 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.541260 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.541279 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.541292 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:51Z","lastTransitionTime":"2026-01-29T16:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.598754 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 18:13:24.731327363 +0000 UTC Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.643663 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.643728 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.643739 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.643756 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.643770 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:51Z","lastTransitionTime":"2026-01-29T16:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.746484 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.746525 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.746533 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.746549 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.746558 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:51Z","lastTransitionTime":"2026-01-29T16:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.849489 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.849545 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.849553 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.849569 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.849578 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:51Z","lastTransitionTime":"2026-01-29T16:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.952433 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.952490 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.952502 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.952522 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:51 crc kubenswrapper[4886]: I0129 16:22:51.952537 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:51Z","lastTransitionTime":"2026-01-29T16:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.055687 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.055760 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.055779 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.055807 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.055828 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:52Z","lastTransitionTime":"2026-01-29T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.083815 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bsnwn_d46238ab-90d4-41b8-b546-6dbff06cf5ed/ovnkube-controller/1.log" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.084383 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bsnwn_d46238ab-90d4-41b8-b546-6dbff06cf5ed/ovnkube-controller/0.log" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.087401 4886 generic.go:334] "Generic (PLEG): container finished" podID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerID="21734fb20c50ad0defe1dc5f098c4d5a6406a0313fb256691eef65eef2b91b0c" exitCode=1 Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.087466 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" event={"ID":"d46238ab-90d4-41b8-b546-6dbff06cf5ed","Type":"ContainerDied","Data":"21734fb20c50ad0defe1dc5f098c4d5a6406a0313fb256691eef65eef2b91b0c"} Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.087514 4886 scope.go:117] "RemoveContainer" containerID="142e4661b770aaa69b754a25ef64f05a9d6f2fe9b9ebb196d61675eec6bc2300" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.087841 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.088530 4886 scope.go:117] "RemoveContainer" containerID="21734fb20c50ad0defe1dc5f098c4d5a6406a0313fb256691eef65eef2b91b0c" Jan 29 16:22:52 crc kubenswrapper[4886]: E0129 16:22:52.088834 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bsnwn_openshift-ovn-kubernetes(d46238ab-90d4-41b8-b546-6dbff06cf5ed)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.105126 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.124945 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.137378 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a420fc-ad8c-41c3-82c3-1e23731e1f55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://689b39c75b6ca5561959fd753c3fe27c3ad2584d5efc8ffa1edd4a0b14b91bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6ef95d9dbe53c4f2428892b94b669bade8eeae51041691998500d0d2be87a40b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tpc4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.149509 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c7wkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75261312-030c-44eb-8d08-07a35f5bcfcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c7wkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.158299 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.158355 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.158369 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.158387 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.158398 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:52Z","lastTransitionTime":"2026-01-29T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.161884 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.175306 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.187111 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.199094 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.215144 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2126e0e150d4a578976def8715d596ae31d0561b0eaa832061d4fb86a8a930\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.230613 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.248440 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca897a9b4e4a2b647e34e013a9d20e83e7576e3f2f4a44d30ce36c4efff1a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.260548 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.260598 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:52 crc 
kubenswrapper[4886]: I0129 16:22:52.260610 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.260627 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.260639 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:52Z","lastTransitionTime":"2026-01-29T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.261629 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.274378 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.294354 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21734fb20c50ad0defe1dc5f098c4d5a6406a0313fb256691eef65eef2b91b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://142e4661b770aaa69b754a25ef64f05a9d6f2fe9b9ebb196d61675eec6bc2300\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:22:46Z\\\",\\\"message\\\":\\\"ent handler 1 for removal\\\\nI0129 16:22:44.236556 6133 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 16:22:44.236443 6133 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 16:22:44.236612 6133 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:22:44.236556 6133 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 16:22:44.236638 6133 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 16:22:44.236628 6133 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:22:44.236676 6133 factory.go:656] Stopping watch factory\\\\nI0129 16:22:44.236675 6133 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:22:44.236728 6133 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:22:44.236737 6133 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 16:22:44.236790 6133 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:22:44.236807 6133 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.307214 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.322143 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.337651 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.348515 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.360246 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 
16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.363090 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.363310 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.363353 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.363390 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.363403 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:52Z","lastTransitionTime":"2026-01-29T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.376367 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2126e0e150d4a578976def8715d596ae31d0561b0eaa832061d4fb86a8a930\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.390672 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.405575 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca897a9b4e4a2b647e34e013a9d20e83e7576e3f2f4a44d30ce36c4efff1a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.426441 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21734fb20c50ad0defe1dc5f098c4d5a6406a0313fb256691eef65eef2b91b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://142e4661b770aaa69b754a25ef64f05a9d6f2fe9b9ebb196d61675eec6bc2300\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:22:46Z\\\",\\\"message\\\":\\\"ent handler 1 for removal\\\\nI0129 16:22:44.236556 6133 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 16:22:44.236443 6133 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 16:22:44.236612 6133 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:22:44.236556 6133 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 16:22:44.236638 6133 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 16:22:44.236628 6133 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:22:44.236676 6133 factory.go:656] Stopping watch factory\\\\nI0129 16:22:44.236675 6133 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:22:44.236728 6133 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:22:44.236737 6133 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 16:22:44.236790 6133 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:22:44.236807 6133 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21734fb20c50ad0defe1dc5f098c4d5a6406a0313fb256691eef65eef2b91b0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:22:51Z\\\",\\\"message\\\":\\\"-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.53\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", 
inport:8383, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.53\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8081, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 16:22:50.038135 6410 services_controller.go:444] Built service openshift-marketplace/marketplace-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:22:50.038135 6410 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 16:22:50.038141 6410 services_controller.go:445] Built service openshift-marketplace/marketplace-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0129 16:22:50.038236 6410 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.435534 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.446147 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.456394 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.464708 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.465404 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.465424 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.465434 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.465448 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.465457 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:52Z","lastTransitionTime":"2026-01-29T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.474696 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a420fc-ad8c-41c3-82c3-1e23731e1f55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://689b39c75b6ca5561959fd753c3fe27c3ad2584d5efc8ffa1edd4a0b14b91bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\
\"cri-o://6ef95d9dbe53c4f2428892b94b669bade8eeae51041691998500d0d2be87a40b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tpc4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.483800 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c7wkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75261312-030c-44eb-8d08-07a35f5bcfcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c7wkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.494023 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.503890 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.513700 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\
\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.568149 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.568191 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.568200 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.568214 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.568223 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:52Z","lastTransitionTime":"2026-01-29T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.599400 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 04:53:25.164112576 +0000 UTC Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.614764 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.614796 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:22:52 crc kubenswrapper[4886]: E0129 16:22:52.614895 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.614907 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:22:52 crc kubenswrapper[4886]: E0129 16:22:52.615103 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.615211 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:22:52 crc kubenswrapper[4886]: E0129 16:22:52.615289 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:22:52 crc kubenswrapper[4886]: E0129 16:22:52.615429 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.670098 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.670152 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.670163 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.670179 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.670192 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:52Z","lastTransitionTime":"2026-01-29T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.772940 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.772986 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.772999 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.773016 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.773028 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:52Z","lastTransitionTime":"2026-01-29T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.875568 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.875841 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.876110 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.876247 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.876283 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:52Z","lastTransitionTime":"2026-01-29T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.978888 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.978938 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.978951 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.978967 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:52 crc kubenswrapper[4886]: I0129 16:22:52.979013 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:52Z","lastTransitionTime":"2026-01-29T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
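The five-event blocks repeating here (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady) come from the kubelet's node-status loop: memory, disk, and PID pressure all pass, but the Ready condition stays False because the container runtime reports NetworkReady=false until a CNI network config appears in /etc/kubernetes/cni/net.d/. A minimal sketch of that directory check follows; the extension filter (.conf/.conflist/.json) is a simplification of what the runtime actually accepts, and the directory is taken from the log message.

    // Sketch of the condition behind NetworkPluginNotReady: is there
    // any CNI network configuration in the runtime's conf dir?
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        confDir := "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(confDir)
        if err != nil {
            fmt.Println("cannot read conf dir:", err)
            return
        }
        var found []string
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                found = append(found, e.Name())
            }
        }
        if len(found) == 0 {
            fmt.Println("no CNI configuration file in", confDir)
            return
        }
        fmt.Println("CNI config present:", found)
    }

Once the network provider (OVN-Kubernetes here) writes its conflist, the condition flips and the NodeNotReady events stop.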
Has your network provider started?"} Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.081491 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.081524 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.081533 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.081547 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.081556 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:53Z","lastTransitionTime":"2026-01-29T16:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.096920 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bsnwn_d46238ab-90d4-41b8-b546-6dbff06cf5ed/ovnkube-controller/1.log" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.184449 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.184507 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.184519 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.184538 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.184551 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:53Z","lastTransitionTime":"2026-01-29T16:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.288082 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.288139 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.288158 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.288189 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.288210 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:53Z","lastTransitionTime":"2026-01-29T16:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.391400 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.391448 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.391465 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.391486 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.391500 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:53Z","lastTransitionTime":"2026-01-29T16:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.494758 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.494836 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.494860 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.494891 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.494914 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:53Z","lastTransitionTime":"2026-01-29T16:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.598436 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.598493 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.598505 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.598531 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.598545 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:53Z","lastTransitionTime":"2026-01-29T16:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.599859 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 06:51:32.482769555 +0000 UTC Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.701310 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.701357 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.701365 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.701380 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.701390 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:53Z","lastTransitionTime":"2026-01-29T16:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
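Note the certificate_manager.go lines interleaved with the status loop: the kubelet-serving certificate is valid until 2026-02-24, yet successive passes log different rotation deadlines (2025-12-29, then 2026-01-05, then 2026-01-17). That is expected behavior, not drift: client-go's certificate manager re-draws a jittered deadline, uniformly in roughly the 70-90% span of the certificate lifetime, so rotation attempts spread out over time and across nodes. A sketch of that computation is below; the NotBefore value is an assumption (only NotAfter appears in the log), and the 0.7-0.9 window is modeled on upstream client-go, not read from this system.

    // Sketch: jittered rotation deadline, re-drawn on each evaluation,
    // which is why each log line above shows a different date.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        // uniform in [70%, 90%] of the certificate lifetime
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // from the log
        notBefore := notAfter.AddDate(-1, 0, 0)                         // assumed 1y lifetime
        for i := 0; i < 3; i++ {
            fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
        }
    }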
Has your network provider started?"} Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.803545 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.803601 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.803614 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.803634 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.803646 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:53Z","lastTransitionTime":"2026-01-29T16:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.906196 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.906228 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.906238 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.906253 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:53 crc kubenswrapper[4886]: I0129 16:22:53.906263 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:53Z","lastTransitionTime":"2026-01-29T16:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.008891 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.008935 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.008946 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.008964 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.008976 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:54Z","lastTransitionTime":"2026-01-29T16:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.111216 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.111249 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.111258 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.111272 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.111281 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:54Z","lastTransitionTime":"2026-01-29T16:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.214074 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.214395 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.214404 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.214418 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.214426 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:54Z","lastTransitionTime":"2026-01-29T16:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.316804 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.316847 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.316858 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.316874 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.316884 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:54Z","lastTransitionTime":"2026-01-29T16:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.419396 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.419441 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.419454 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.419468 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.419478 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:54Z","lastTransitionTime":"2026-01-29T16:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.463375 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:22:54 crc kubenswrapper[4886]: E0129 16:22:54.463520 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:23:26.463502687 +0000 UTC m=+89.372221959 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.463937 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.464139 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:22:54 crc kubenswrapper[4886]: E0129 16:22:54.464189 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:22:54 crc kubenswrapper[4886]: E0129 16:22:54.464366 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:22:54 crc kubenswrapper[4886]: E0129 16:22:54.464380 4886 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:22:54 crc kubenswrapper[4886]: E0129 16:22:54.464417 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:23:26.464407832 +0000 UTC m=+89.373127094 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:22:54 crc kubenswrapper[4886]: E0129 16:22:54.464243 4886 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:22:54 crc kubenswrapper[4886]: E0129 16:22:54.464536 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
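The UnmountVolume.TearDown failure just above is a registration-ordering problem rather than a storage fault: the kubelet discovers CSI drivers through registration sockets that each driver's node-driver-registrar publishes under the kubelet plugin registry, and kubevirt.io.hostpath-provisioner has not re-registered yet after this restart, so the unmount is parked for a 32s retry. A small sketch that lists whatever is currently registered; /var/lib/kubelet/plugins_registry is the conventional default path and is an assumption here.

    // Sketch: enumerate CSI driver registration sockets. If the socket for
    // kubevirt.io.hostpath-provisioner is absent, TearDownAt fails exactly
    // as logged ("driver name ... not found in the list of registered CSI drivers").
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        dir := "/var/lib/kubelet/plugins_registry" // assumed default registry path
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("cannot read plugin registry:", err)
            return
        }
        for _, e := range entries {
            fmt.Println("registered plugin socket:", e.Name())
        }
    }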
No retries permitted until 2026-01-29 16:23:26.464515115 +0000 UTC m=+89.373234427 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.464846 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.465049 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:22:54 crc kubenswrapper[4886]: E0129 16:22:54.464921 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:22:54 crc kubenswrapper[4886]: E0129 16:22:54.465444 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:22:54 crc kubenswrapper[4886]: E0129 16:22:54.465573 4886 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:22:54 crc kubenswrapper[4886]: E0129 16:22:54.465737 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:23:26.465716809 +0000 UTC m=+89.374436121 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:22:54 crc kubenswrapper[4886]: E0129 16:22:54.465136 4886 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:22:54 crc kubenswrapper[4886]: E0129 16:22:54.466037 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-29 16:23:26.466020057 +0000 UTC m=+89.374739359 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.521917 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.521975 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.521991 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.522015 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.522028 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:54Z","lastTransitionTime":"2026-01-29T16:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.600046 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 09:48:49.264226427 +0000 UTC Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.614644 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.614765 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.614803 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:22:54 crc kubenswrapper[4886]: E0129 16:22:54.615018 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.615248 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:22:54 crc kubenswrapper[4886]: E0129 16:22:54.615601 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:22:54 crc kubenswrapper[4886]: E0129 16:22:54.615775 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:22:54 crc kubenswrapper[4886]: E0129 16:22:54.616267 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.624269 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.624690 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.624799 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.624922 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.625011 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:54Z","lastTransitionTime":"2026-01-29T16:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.727204 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.727237 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.727245 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.727259 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.727270 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:54Z","lastTransitionTime":"2026-01-29T16:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.830428 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.830500 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.830522 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.830546 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.830563 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:54Z","lastTransitionTime":"2026-01-29T16:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.932483 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.932535 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.932545 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.932561 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:54 crc kubenswrapper[4886]: I0129 16:22:54.932572 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:54Z","lastTransitionTime":"2026-01-29T16:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.035483 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.035538 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.035549 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.035572 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.035583 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:55Z","lastTransitionTime":"2026-01-29T16:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.137208 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.137258 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.137271 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.137289 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.137305 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:55Z","lastTransitionTime":"2026-01-29T16:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.239935 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.239972 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.239984 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.240003 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.240015 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:55Z","lastTransitionTime":"2026-01-29T16:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.341957 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.342235 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.342389 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.342522 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.342620 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:55Z","lastTransitionTime":"2026-01-29T16:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.445280 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.445538 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.445640 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.445718 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.445786 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:55Z","lastTransitionTime":"2026-01-29T16:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.548865 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.548951 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.548965 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.548985 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.548998 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:55Z","lastTransitionTime":"2026-01-29T16:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.601092 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 07:51:30.215213567 +0000 UTC Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.652395 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.652454 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.652473 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.652498 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.652518 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:55Z","lastTransitionTime":"2026-01-29T16:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.754622 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.755192 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.755216 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.755236 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.755250 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:55Z","lastTransitionTime":"2026-01-29T16:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.858149 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.858209 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.858222 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.858238 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.858250 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:55Z","lastTransitionTime":"2026-01-29T16:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.928245 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.937790 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.941092 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.951805 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.960522 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.960587 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.960603 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.960625 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.960642 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:55Z","lastTransitionTime":"2026-01-29T16:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.965840 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.977625 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a420fc-ad8c-41c3-82c3-1e23731e1f55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://689b39c75b6ca5561959fd753c3fe27c3ad2584d5efc8ffa1edd4a0b14b91bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef95d9dbe53c4f2428892b94b669bade8eeae51041691998500d0d2be87a40b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tpc4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:55 crc kubenswrapper[4886]: I0129 16:22:55.992045 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c7wkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75261312-030c-44eb-8d08-07a35f5bcfcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c7wkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.007241 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2126e0e150d4a578976def8715d596ae31d0561b0eaa832061d4fb86a8a930\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.019267 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.029402 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.038997 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.049978 4886 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.062496 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.062555 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.062568 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.062585 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.062597 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:56Z","lastTransitionTime":"2026-01-29T16:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.063136 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca897a9b4e4a2b647e34e013a9d20e83e7576e3f2f4a44d30ce36c4efff1a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.075463 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480
fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.085548 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.094663 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.130855 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerI
D\\\":\\\"cri-o://b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21734fb20c50ad0defe1dc5f098c4d5a6406a0313fb256691eef65eef2b91b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://142e4661b770aaa69b754a25ef64f05a9d6f2fe9b9ebb196d61675eec6bc2300\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:22:46Z\\\",\\\"message\\\":\\\"ent handler 1 for removal\\\\nI0129 16:22:44.236556 6133 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 16:22:44.236443 6133 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 16:22:44.236612 6133 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:22:44.236556 6133 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 16:22:44.236638 6133 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 16:22:44.236628 6133 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:22:44.236676 6133 factory.go:656] Stopping watch factory\\\\nI0129 16:22:44.236675 6133 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:22:44.236728 6133 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:22:44.236737 6133 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 16:22:44.236790 6133 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:22:44.236807 6133 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21734fb20c50ad0defe1dc5f098c4d5a6406a0313fb256691eef65eef2b91b0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:22:51Z\\\",\\\"message\\\":\\\"-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.53\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8383, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.53\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8081, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 16:22:50.038135 6410 services_controller.go:444] Built service openshift-marketplace/marketplace-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:22:50.038135 6410 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 16:22:50.038141 6410 services_controller.go:445] Built service openshift-marketplace/marketplace-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0129 16:22:50.038236 6410 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} 
was\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.146366 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.1
26.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.164688 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.164727 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.164739 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.164757 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.164769 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:56Z","lastTransitionTime":"2026-01-29T16:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.267425 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.267690 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.267777 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.267928 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.268145 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:56Z","lastTransitionTime":"2026-01-29T16:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.371096 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.371129 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.371139 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.371154 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.371164 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:56Z","lastTransitionTime":"2026-01-29T16:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.473626 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.473684 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.473724 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.473741 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.473750 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:56Z","lastTransitionTime":"2026-01-29T16:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.576200 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.576496 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.576627 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.576741 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.576815 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:56Z","lastTransitionTime":"2026-01-29T16:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.601776 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 00:24:23.180919507 +0000 UTC Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.614147 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.614469 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:22:56 crc kubenswrapper[4886]: E0129 16:22:56.614548 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.614616 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.614642 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:22:56 crc kubenswrapper[4886]: E0129 16:22:56.614762 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:22:56 crc kubenswrapper[4886]: E0129 16:22:56.614848 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:22:56 crc kubenswrapper[4886]: E0129 16:22:56.614994 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.679965 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.680014 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.680022 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.680037 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.680047 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:56Z","lastTransitionTime":"2026-01-29T16:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.783725 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.784248 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.784312 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.784398 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.784454 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:56Z","lastTransitionTime":"2026-01-29T16:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.887157 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.887223 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.887252 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.887288 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.887312 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:56Z","lastTransitionTime":"2026-01-29T16:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.990004 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.990052 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.990064 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.990082 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:56 crc kubenswrapper[4886]: I0129 16:22:56.990096 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:56Z","lastTransitionTime":"2026-01-29T16:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.093852 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.093906 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.093916 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.093945 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.093954 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:57Z","lastTransitionTime":"2026-01-29T16:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.196002 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.196068 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.196092 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.196122 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.196145 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:57Z","lastTransitionTime":"2026-01-29T16:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.299550 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.299628 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.299650 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.299680 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.299704 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:57Z","lastTransitionTime":"2026-01-29T16:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.402904 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.403417 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.403622 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.403846 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.404055 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:57Z","lastTransitionTime":"2026-01-29T16:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.507086 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.507153 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.507165 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.507187 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.507205 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:57Z","lastTransitionTime":"2026-01-29T16:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.602048 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 11:20:05.03960931 +0000 UTC Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.609001 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.609161 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.609445 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.609554 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.609616 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:57Z","lastTransitionTime":"2026-01-29T16:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.712786 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.712825 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.712836 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.712855 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.712866 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:57Z","lastTransitionTime":"2026-01-29T16:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.816034 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.817375 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.817410 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.817439 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.817454 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:57Z","lastTransitionTime":"2026-01-29T16:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.923346 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.923392 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.923405 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.923425 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:57 crc kubenswrapper[4886]: I0129 16:22:57.923437 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:57Z","lastTransitionTime":"2026-01-29T16:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.026433 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.026821 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.027156 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.027380 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.027555 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:58Z","lastTransitionTime":"2026-01-29T16:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.130764 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.130810 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.130825 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.130848 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.130865 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:58Z","lastTransitionTime":"2026-01-29T16:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.233580 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.233654 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.233670 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.233703 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.233722 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:58Z","lastTransitionTime":"2026-01-29T16:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.336655 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.336704 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.336719 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.336739 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.336756 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:58Z","lastTransitionTime":"2026-01-29T16:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.439542 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.439599 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.439617 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.439640 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.439659 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:58Z","lastTransitionTime":"2026-01-29T16:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.543197 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.543279 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.543303 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.543372 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.543411 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:58Z","lastTransitionTime":"2026-01-29T16:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.602639 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 00:46:46.792469355 +0000 UTC Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.614364 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.614450 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:22:58 crc kubenswrapper[4886]: E0129 16:22:58.614895 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.614477 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:22:58 crc kubenswrapper[4886]: E0129 16:22:58.615129 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:22:58 crc kubenswrapper[4886]: E0129 16:22:58.615017 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.614469 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:22:58 crc kubenswrapper[4886]: E0129 16:22:58.615388 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.636922 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.645452 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.645502 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.645513 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.645535 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.645550 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:58Z","lastTransitionTime":"2026-01-29T16:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.648739 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.669896 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21734fb20c50ad0defe1dc5f098c4d5a6406a0313fb256691eef65eef2b91b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://142e4661b770aaa69b754a25ef64f05a9d6f2fe9b9ebb196d61675eec6bc2300\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:22:46Z\\\",\\\"message\\\":\\\"ent handler 1 for removal\\\\nI0129 16:22:44.236556 6133 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 16:22:44.236443 6133 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 16:22:44.236612 6133 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:22:44.236556 6133 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 16:22:44.236638 6133 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 16:22:44.236628 6133 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:22:44.236676 6133 factory.go:656] Stopping watch factory\\\\nI0129 16:22:44.236675 6133 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:22:44.236728 6133 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:22:44.236737 6133 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 16:22:44.236790 6133 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:22:44.236807 6133 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21734fb20c50ad0defe1dc5f098c4d5a6406a0313fb256691eef65eef2b91b0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:22:51Z\\\",\\\"message\\\":\\\"-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.53\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8383, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.53\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8081, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 16:22:50.038135 6410 services_controller.go:444] Built service openshift-marketplace/marketplace-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:22:50.038135 6410 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 16:22:50.038141 6410 services_controller.go:445] Built service openshift-marketplace/marketplace-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0129 16:22:50.038236 6410 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} 
was\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.684243 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.1
26.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.698241 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117e
b2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.716179 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.733195 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.746928 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.746964 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.746975 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.746989 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.746999 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:58Z","lastTransitionTime":"2026-01-29T16:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.752218 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a420fc-ad8c-41c3-82c3-1e23731e1f55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://689b39c75b6ca5561959fd753c3fe27c3ad2584d5efc8ffa1edd4a0b14b91bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef95d9dbe53c4f2428892b94b669bade8eeae51041691998500d0d2be87a40b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tpc4f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.762817 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c7wkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75261312-030c-44eb-8d08-07a35f5bcfcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c7wkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:58 crc 
kubenswrapper[4886]: I0129 16:22:58.773832 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.785636 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.798064 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.806509 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.818039 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2126e0e150d4a578976def8715d596ae31d0561b0eaa832061d4fb86a8a930\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.829071 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.840695 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca897a9b4e4a2b647e34e013a9d20e83e7576e3f2f4a44d30ce36c4efff1a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.849014 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.849043 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:22:58 crc 
kubenswrapper[4886]: I0129 16:22:58.849051 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.849065 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.849075 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:58Z","lastTransitionTime":"2026-01-29T16:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.853430 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50c05fff-ee54-4ee8-a4f9-93807f7df3db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c5243735574fb8f3b0de74ff95f08f9b3efdf7377f0f56e20b15ef6c859fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841de8a754cdf15452fd36d55173c1017dec05d898f5a51109562c77cbbf76b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://92150b6456594fe8576872c07810d1984badff360fdeaa76b4db40179836b5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:22:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.920610 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/75261312-030c-44eb-8d08-07a35f5bcfcc-metrics-certs\") pod \"network-metrics-daemon-c7wkw\" (UID: \"75261312-030c-44eb-8d08-07a35f5bcfcc\") " pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:22:58 crc kubenswrapper[4886]: E0129 16:22:58.920751 4886 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:22:58 crc kubenswrapper[4886]: E0129 16:22:58.920807 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75261312-030c-44eb-8d08-07a35f5bcfcc-metrics-certs podName:75261312-030c-44eb-8d08-07a35f5bcfcc nodeName:}" failed. No retries permitted until 2026-01-29 16:23:14.920793989 +0000 UTC m=+77.829513261 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/75261312-030c-44eb-8d08-07a35f5bcfcc-metrics-certs") pod "network-metrics-daemon-c7wkw" (UID: "75261312-030c-44eb-8d08-07a35f5bcfcc") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.951174 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.951205 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.951215 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.951231 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:58 crc kubenswrapper[4886]: I0129 16:22:58.951241 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:58Z","lastTransitionTime":"2026-01-29T16:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.054252 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.054291 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.054311 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.054343 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.054357 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:59Z","lastTransitionTime":"2026-01-29T16:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.162685 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.162868 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.162880 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.162896 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.162908 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:59Z","lastTransitionTime":"2026-01-29T16:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.265811 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.265866 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.265889 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.265916 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.265937 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:59Z","lastTransitionTime":"2026-01-29T16:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.368720 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.368765 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.368774 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.368788 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.368798 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:59Z","lastTransitionTime":"2026-01-29T16:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.470538 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.470596 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.470613 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.470636 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.470653 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:59Z","lastTransitionTime":"2026-01-29T16:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.576984 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.577078 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.577106 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.577141 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.577163 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:59Z","lastTransitionTime":"2026-01-29T16:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.603413 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 09:42:27.804113374 +0000 UTC
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.679295 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.679343 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.679352 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.679367 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.679394 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:59Z","lastTransitionTime":"2026-01-29T16:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.781871 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.781911 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.781922 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.781939 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.781951 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:59Z","lastTransitionTime":"2026-01-29T16:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.884850 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.884883 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.884893 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.884909 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.884917 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:59Z","lastTransitionTime":"2026-01-29T16:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.987240 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.987314 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.987358 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.987383 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:22:59 crc kubenswrapper[4886]: I0129 16:22:59.987401 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:22:59Z","lastTransitionTime":"2026-01-29T16:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.063116 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.063149 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.063157 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.063175 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.063184 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:00Z","lastTransitionTime":"2026-01-29T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:00 crc kubenswrapper[4886]: E0129 16:23:00.081656 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:00Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.085353 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.085395 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.085405 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.085420 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.085432 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:00Z","lastTransitionTime":"2026-01-29T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:00 crc kubenswrapper[4886]: E0129 16:23:00.103418 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:00Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.107264 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.107300 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.107312 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.107347 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.107361 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:00Z","lastTransitionTime":"2026-01-29T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:00 crc kubenswrapper[4886]: E0129 16:23:00.129129 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:00Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.132920 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.132955 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.132963 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.132976 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.132985 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:00Z","lastTransitionTime":"2026-01-29T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:00 crc kubenswrapper[4886]: E0129 16:23:00.149631 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:00Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.153963 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.154041 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.154061 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.154087 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.154109 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:00Z","lastTransitionTime":"2026-01-29T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:00 crc kubenswrapper[4886]: E0129 16:23:00.168371 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:00Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:00 crc kubenswrapper[4886]: E0129 16:23:00.168500 4886 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.170899 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.170937 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.170946 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.170960 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.170969 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:00Z","lastTransitionTime":"2026-01-29T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.273821 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.273862 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.273870 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.273886 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.273897 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:00Z","lastTransitionTime":"2026-01-29T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.376746 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.376812 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.376829 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.376853 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.376922 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:00Z","lastTransitionTime":"2026-01-29T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.479229 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.479268 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.479279 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.479295 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.479306 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:00Z","lastTransitionTime":"2026-01-29T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.581392 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.581423 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.581431 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.581445 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.581453 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:00Z","lastTransitionTime":"2026-01-29T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.603754 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 01:34:18.826402186 +0000 UTC Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.614687 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:00 crc kubenswrapper[4886]: E0129 16:23:00.614845 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.614690 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.614896 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.614687 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:00 crc kubenswrapper[4886]: E0129 16:23:00.614965 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:23:00 crc kubenswrapper[4886]: E0129 16:23:00.615056 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:23:00 crc kubenswrapper[4886]: E0129 16:23:00.615125 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.685161 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.685203 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.685213 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.685229 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.685241 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:00Z","lastTransitionTime":"2026-01-29T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
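Every NotReady heartbeat in this stretch cites the same root cause: no CNI configuration file in /etc/kubernetes/cni/net.d/. A minimal sketch of that directory probe is below, assuming libcni's usual extension filter (.conf, .conflist, .json); the real runtime goes through libcni rather than a hand-rolled scan:

```go
// Hedged sketch: approximate the check behind "no CNI configuration file in
// /etc/kubernetes/cni/net.d/" by listing candidate config files the way
// libcni's file filter does.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	var confs []string
	for _, e := range entries {
		switch strings.ToLower(filepath.Ext(e.Name())) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		// Matches the condition kubelet keeps logging: the network plugin
		// is not ready, so the node stays NotReady.
		fmt.Println("no CNI configuration file found; network not ready")
		return
	}
	fmt.Println("CNI configs:", confs)
}
```

Once the network provider writes a valid config into that directory (normally done by its daemonset, not by hand), the runtime flips NetworkReady back to true and these heartbeats stop.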
Has your network provider started?"} Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.788590 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.788684 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.788710 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.788743 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.788761 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:00Z","lastTransitionTime":"2026-01-29T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.890828 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.890900 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.890923 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.890938 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.890947 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:00Z","lastTransitionTime":"2026-01-29T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.993834 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.993877 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.993889 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.993907 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:00 crc kubenswrapper[4886]: I0129 16:23:00.993919 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:00Z","lastTransitionTime":"2026-01-29T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.096795 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.096842 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.096855 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.096872 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.096884 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:01Z","lastTransitionTime":"2026-01-29T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.200126 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.200165 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.200173 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.200189 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.200197 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:01Z","lastTransitionTime":"2026-01-29T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.303464 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.303527 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.303555 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.303586 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.303611 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:01Z","lastTransitionTime":"2026-01-29T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.406642 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.406714 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.406737 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.406771 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.406796 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:01Z","lastTransitionTime":"2026-01-29T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.509751 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.509832 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.509855 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.509887 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.509910 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:01Z","lastTransitionTime":"2026-01-29T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.604145 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 15:12:29.034227397 +0000 UTC Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.613794 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.613847 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.613861 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.613897 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.613925 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:01Z","lastTransitionTime":"2026-01-29T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.716404 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.716453 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.716469 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.716489 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.716504 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:01Z","lastTransitionTime":"2026-01-29T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
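The kubelet-serving certificate_manager lines above print a different rotation deadline on each pass (2025-11-07, then 2025-12-22). That is expected: client-go recomputes a jittered deadline, roughly NotBefore plus 70 to 90 percent of the certificate's total lifetime, each time it is consulted. A sketch under that assumption; the 0.7/0.2 constants and the assumed issue date are illustrative, not the exact upstream code:

```go
// Hedged sketch: why the "rotation deadline" differs on every
// certificate_manager line above.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	// Jitter into the 70%..90% band of the lifetime (assumed constants).
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // from the log
	notBefore := notAfter.Add(-365 * 24 * time.Hour)                // assumed issue time
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}
```

The jitter spreads renewals across a fleet so that nodes do not all rotate their serving certificates at the same instant.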
Has your network provider started?"} Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.818776 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.818829 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.818842 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.818886 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.818902 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:01Z","lastTransitionTime":"2026-01-29T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.920991 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.921031 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.921041 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.921057 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:01 crc kubenswrapper[4886]: I0129 16:23:01.921068 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:01Z","lastTransitionTime":"2026-01-29T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.023862 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.023900 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.023911 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.023927 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.023937 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:02Z","lastTransitionTime":"2026-01-29T16:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.126763 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.126826 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.126843 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.126871 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.126889 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:02Z","lastTransitionTime":"2026-01-29T16:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.229563 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.229602 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.229610 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.229624 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.229633 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:02Z","lastTransitionTime":"2026-01-29T16:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.332509 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.332545 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.332553 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.332570 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.332579 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:02Z","lastTransitionTime":"2026-01-29T16:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.435056 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.435093 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.435101 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.435116 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.435126 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:02Z","lastTransitionTime":"2026-01-29T16:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.537945 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.537987 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.537997 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.538014 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.538024 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:02Z","lastTransitionTime":"2026-01-29T16:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.604627 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 00:15:18.811759963 +0000 UTC Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.614123 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.614174 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:02 crc kubenswrapper[4886]: E0129 16:23:02.614298 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.614383 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.614406 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:02 crc kubenswrapper[4886]: E0129 16:23:02.614528 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:23:02 crc kubenswrapper[4886]: E0129 16:23:02.614727 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:23:02 crc kubenswrapper[4886]: E0129 16:23:02.614930 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.640027 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.640104 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.640118 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.640134 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.640168 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:02Z","lastTransitionTime":"2026-01-29T16:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.742359 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.742408 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.742419 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.742436 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.742473 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:02Z","lastTransitionTime":"2026-01-29T16:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.846359 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.846404 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.846415 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.846430 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.846442 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:02Z","lastTransitionTime":"2026-01-29T16:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.948365 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.948398 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.948406 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.948420 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:02 crc kubenswrapper[4886]: I0129 16:23:02.948430 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:02Z","lastTransitionTime":"2026-01-29T16:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.050858 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.050898 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.050907 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.050922 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.050931 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:03Z","lastTransitionTime":"2026-01-29T16:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.153165 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.153193 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.153202 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.153217 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.153227 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:03Z","lastTransitionTime":"2026-01-29T16:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.255615 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.255659 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.255670 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.255689 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.255701 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:03Z","lastTransitionTime":"2026-01-29T16:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.357647 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.357685 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.357694 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.357708 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.357718 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:03Z","lastTransitionTime":"2026-01-29T16:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.460518 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.460570 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.460583 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.460602 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.460620 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:03Z","lastTransitionTime":"2026-01-29T16:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.562910 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.562935 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.562945 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.562958 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.562966 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:03Z","lastTransitionTime":"2026-01-29T16:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.605082 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 12:51:33.218352702 +0000 UTC Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.665617 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.665689 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.665707 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.665733 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.665751 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:03Z","lastTransitionTime":"2026-01-29T16:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.767705 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.767743 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.767753 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.767769 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.767779 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:03Z","lastTransitionTime":"2026-01-29T16:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.871276 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.871342 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.871354 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.871371 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.871382 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:03Z","lastTransitionTime":"2026-01-29T16:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.974187 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.974216 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.974226 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.974240 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:03 crc kubenswrapper[4886]: I0129 16:23:03.974250 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:03Z","lastTransitionTime":"2026-01-29T16:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.077255 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.077301 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.077316 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.077364 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.077387 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:04Z","lastTransitionTime":"2026-01-29T16:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.179297 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.179347 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.179356 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.179372 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.179381 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:04Z","lastTransitionTime":"2026-01-29T16:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.281962 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.282038 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.282056 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.282075 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.282089 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:04Z","lastTransitionTime":"2026-01-29T16:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.384400 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.384442 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.384453 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.384470 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.384481 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:04Z","lastTransitionTime":"2026-01-29T16:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.486482 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.486514 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.486522 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.486537 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.486547 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:04Z","lastTransitionTime":"2026-01-29T16:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.588451 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.588505 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.588522 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.588546 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.588561 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:04Z","lastTransitionTime":"2026-01-29T16:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.605762 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 14:37:31.179004155 +0000 UTC Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.614408 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.614468 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.614531 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:04 crc kubenswrapper[4886]: E0129 16:23:04.614613 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.614678 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:23:04 crc kubenswrapper[4886]: E0129 16:23:04.614811 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:23:04 crc kubenswrapper[4886]: E0129 16:23:04.614917 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:23:04 crc kubenswrapper[4886]: E0129 16:23:04.614980 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.691438 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.691488 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.691505 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.691529 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.691548 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:04Z","lastTransitionTime":"2026-01-29T16:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.793687 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.793727 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.793738 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.793755 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.793768 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:04Z","lastTransitionTime":"2026-01-29T16:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.896211 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.896249 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.896268 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.896292 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.896303 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:04Z","lastTransitionTime":"2026-01-29T16:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.998555 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.998602 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.998614 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.998632 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:04 crc kubenswrapper[4886]: I0129 16:23:04.998644 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:04Z","lastTransitionTime":"2026-01-29T16:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.101713 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.101758 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.101768 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.101787 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.101801 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:05Z","lastTransitionTime":"2026-01-29T16:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.206570 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.206645 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.206661 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.206682 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.206717 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:05Z","lastTransitionTime":"2026-01-29T16:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.309165 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.309315 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.309368 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.309385 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.309396 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:05Z","lastTransitionTime":"2026-01-29T16:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.412227 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.412297 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.412321 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.412396 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.412419 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:05Z","lastTransitionTime":"2026-01-29T16:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.515126 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.515162 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.515175 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.515191 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.515203 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:05Z","lastTransitionTime":"2026-01-29T16:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.606741 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:13:10.321892883 +0000 UTC Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.617194 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.617228 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.617239 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.617254 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.617265 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:05Z","lastTransitionTime":"2026-01-29T16:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.719618 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.719655 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.719663 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.719680 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.719689 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:05Z","lastTransitionTime":"2026-01-29T16:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.821771 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.821825 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.821845 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.821870 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.821890 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:05Z","lastTransitionTime":"2026-01-29T16:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.923962 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.923990 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.924000 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.924015 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:05 crc kubenswrapper[4886]: I0129 16:23:05.924026 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:05Z","lastTransitionTime":"2026-01-29T16:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.027003 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.027319 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.027356 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.027383 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.027402 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:06Z","lastTransitionTime":"2026-01-29T16:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.129535 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.129559 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.129567 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.129580 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.129589 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:06Z","lastTransitionTime":"2026-01-29T16:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.231089 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.231123 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.231133 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.231147 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.231159 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:06Z","lastTransitionTime":"2026-01-29T16:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.333864 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.333907 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.333919 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.333939 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.333951 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:06Z","lastTransitionTime":"2026-01-29T16:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.436780 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.436829 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.436837 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.436851 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.436864 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:06Z","lastTransitionTime":"2026-01-29T16:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.539278 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.539318 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.539368 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.539385 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.539393 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:06Z","lastTransitionTime":"2026-01-29T16:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.607646 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 09:08:00.287275542 +0000 UTC Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.614808 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.614844 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.614893 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.614952 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:06 crc kubenswrapper[4886]: E0129 16:23:06.615029 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:23:06 crc kubenswrapper[4886]: E0129 16:23:06.615158 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:23:06 crc kubenswrapper[4886]: E0129 16:23:06.615285 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:23:06 crc kubenswrapper[4886]: E0129 16:23:06.615380 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.641808 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.641986 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.642057 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.642146 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.642210 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:06Z","lastTransitionTime":"2026-01-29T16:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.745381 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.745419 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.745428 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.745443 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.745452 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:06Z","lastTransitionTime":"2026-01-29T16:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.847499 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.847832 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.848022 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.848250 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.848371 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:06Z","lastTransitionTime":"2026-01-29T16:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.950933 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.950978 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.950989 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.951005 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:06 crc kubenswrapper[4886]: I0129 16:23:06.951016 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:06Z","lastTransitionTime":"2026-01-29T16:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.053695 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.054215 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.054284 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.054376 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.054456 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:07Z","lastTransitionTime":"2026-01-29T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.156166 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.156212 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.156222 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.156240 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.156249 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:07Z","lastTransitionTime":"2026-01-29T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.259974 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.260035 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.260049 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.260073 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.260088 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:07Z","lastTransitionTime":"2026-01-29T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.367958 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.368252 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.368362 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.368467 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.368554 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:07Z","lastTransitionTime":"2026-01-29T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.471047 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.471258 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.471376 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.471487 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.471581 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:07Z","lastTransitionTime":"2026-01-29T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.573432 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.573694 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.573790 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.573891 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.573975 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:07Z","lastTransitionTime":"2026-01-29T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.605462 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.607869 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 03:31:46.114136457 +0000 UTC Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.615028 4886 scope.go:117] "RemoveContainer" containerID="21734fb20c50ad0defe1dc5f098c4d5a6406a0313fb256691eef65eef2b91b0c" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.622594 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.639525 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.654720 4886 status_manager.go:875] "Failed to update status for pod" 
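The two status patches above, and the two that follow, all fail at the same point: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired 2025-08-24T17:21:41Z, while the node clock reads 2026-01-29. The x509 rejection is an ordinary validity-window comparison; a minimal illustrative sketch of that check (not the actual Go verifier, and the notBefore value below is an assumed placeholder):

```python
from datetime import datetime, timezone

def check_validity(not_before, not_after, now):
    """Mirror the x509 validity-window test behind the webhook failures above."""
    if now < not_before:
        return "certificate is not yet valid"
    if now > not_after:
        return ("certificate has expired: current time "
                f"{now.isoformat()} is after {not_after.isoformat()}")
    return "certificate is valid"

now = datetime(2026, 1, 29, 16, 23, 7, tzinfo=timezone.utc)
not_after = datetime(2025, 8, 24, 17, 21, 41, tzinfo=timezone.utc)  # from the log
not_before = datetime(2025, 8, 24, 0, 0, 0, tzinfo=timezone.utc)    # assumed for illustration
print(check_validity(not_before, not_after, now))
```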
pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:07Z is after 2025-08-24T17:21:41Z" Jan 29 
16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.668283 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2126e0e150d4a578976def8715d596ae31d0561b0eaa832061d4fb86a8a930\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.676418 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.676572 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.676658 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.676756 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.676849 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:07Z","lastTransitionTime":"2026-01-29T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.681076 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.698058 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca897a9b4e4a2b647e34e013a9d20e83e7576e3f2f4a44d30ce36c4efff1a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.711364 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50c05fff-ee54-4ee8-a4f9-93807f7df3db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c5243735574fb8f3b0de74ff95f08f9b3efdf7377f0f56e20b15ef6c859fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841de8a754cdf15452fd36d55173c1017dec05d898f5a51109562c77cbbf76b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92150b6456594fe8576872c07810d1984badff360fd
eaa76b4db40179836b5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.725617 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.737796 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.756089 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21734fb20c50ad0defe1dc5f098c4d5a6406a0313fb256691eef65eef2b91b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://142e4661b770aaa69b754a25ef64f05a9d6f2fe9b9ebb196d61675eec6bc2300\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:22:46Z\\\",\\\"message\\\":\\\"ent handler 1 for removal\\\\nI0129 16:22:44.236556 6133 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 16:22:44.236443 6133 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 16:22:44.236612 6133 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:22:44.236556 6133 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 16:22:44.236638 6133 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 16:22:44.236628 6133 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:22:44.236676 6133 factory.go:656] Stopping watch factory\\\\nI0129 16:22:44.236675 6133 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:22:44.236728 6133 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:22:44.236737 6133 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 16:22:44.236790 6133 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:22:44.236807 6133 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21734fb20c50ad0defe1dc5f098c4d5a6406a0313fb256691eef65eef2b91b0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:22:51Z\\\",\\\"message\\\":\\\"-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.53\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8383, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.53\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8081, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 16:22:50.038135 6410 services_controller.go:444] Built service openshift-marketplace/marketplace-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:22:50.038135 6410 metrics.go:553] Stopping 
metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 16:22:50.038141 6410 services_controller.go:445] Built service openshift-marketplace/marketplace-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0129 16:22:50.038236 6410 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.765387 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.776023 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.778559 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.778587 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.778595 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.778609 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.778618 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:07Z","lastTransitionTime":"2026-01-29T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.786091 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.797772 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.806905 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a420fc-ad8c-41c3-82c3-1e23731e1f55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://689b39c75b6ca5561959fd753c3fe27c3ad2584d5efc8ffa1edd4a0b14b91bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef95d9dbe53c4f2428892b94b669bade8eeae51041691998500d0d2be87a40b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tpc4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.815828 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c7wkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75261312-030c-44eb-8d08-07a35f5bcfcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c7wkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.826821 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.838683 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca897a9b4e4a2b647e34e013a9d20e83e7576e3f2f4a44d30ce36c4efff1a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.849730 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50c05fff-ee54-4ee8-a4f9-93807f7df3db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c5243735574fb8f3b0de74ff95f08f9b3efdf7377f0f56e20b15ef6c859fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841de8a754cdf15452fd36d55173c1017dec05d898f5a51109562c77cbbf76b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92150b6456594fe8576872c07810d1984badff360fdeaa76b4db40179836b5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.860958 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.869365 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T16:23:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.880998 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.881044 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.881056 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.881073 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.881092 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:07Z","lastTransitionTime":"2026-01-29T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.888476 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21734fb20c50ad0defe1dc5f098c4d5a6406a031
3fb256691eef65eef2b91b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21734fb20c50ad0defe1dc5f098c4d5a6406a0313fb256691eef65eef2b91b0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:22:51Z\\\",\\\"message\\\":\\\"-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.53\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8383, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.53\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8081, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 16:22:50.038135 6410 services_controller.go:444] Built service openshift-marketplace/marketplace-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:22:50.038135 6410 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 16:22:50.038141 6410 services_controller.go:445] Built service openshift-marketplace/marketplace-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0129 16:22:50.038236 6410 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bsnwn_openshift-ovn-kubernetes(d46238ab-90d4-41b8-b546-6dbff06cf5ed)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.898073 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.913073 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-
manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.929878 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.944824 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.957778 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a420fc-ad8c-41c3-82c3-1e23731e1f55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://689b39c75b6ca5561959fd753c3fe27c3ad2584d5efc8ffa1edd4a0b14b91bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef95d9dbe53c4f2428892b94b669bade8eeae51041691998500d0d2be87a40b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tpc4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.968928 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c7wkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75261312-030c-44eb-8d08-07a35f5bcfcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c7wkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.980712 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.997130 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.997185 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.997196 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.997214 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.997229 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:07Z","lastTransitionTime":"2026-01-29T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:07 crc kubenswrapper[4886]: I0129 16:23:07.999288 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.009698 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.019004 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.028560 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.040932 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2126e0e150d4a578976def8715d596ae31d0561b0eaa832061d4fb86a8a930\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.100756 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.100792 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.100799 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.100814 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.100822 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:08Z","lastTransitionTime":"2026-01-29T16:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.161914 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bsnwn_d46238ab-90d4-41b8-b546-6dbff06cf5ed/ovnkube-controller/1.log" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.164158 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" event={"ID":"d46238ab-90d4-41b8-b546-6dbff06cf5ed","Type":"ContainerStarted","Data":"337c67158d7957062b5ce4ee6477aeea8e6c142251facc3f1f97cfe2d71126d0"} Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.165215 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.178378 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.189311 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc 
kubenswrapper[4886]: I0129 16:23:08.204231 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.204275 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.204288 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.204307 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.204319 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:08Z","lastTransitionTime":"2026-01-29T16:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.214218 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c67158d7957062b5ce4ee6477aeea8e6c1422
51facc3f1f97cfe2d71126d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21734fb20c50ad0defe1dc5f098c4d5a6406a0313fb256691eef65eef2b91b0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:22:51Z\\\",\\\"message\\\":\\\"-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.53\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8383, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.53\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8081, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 16:22:50.038135 6410 services_controller.go:444] Built service openshift-marketplace/marketplace-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:22:50.038135 6410 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 16:22:50.038141 6410 services_controller.go:445] Built service openshift-marketplace/marketplace-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0129 16:22:50.038236 6410 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} 
was\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"co
ntainerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.235577 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.252539 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.281083 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.306657 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.306707 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.306722 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.306935 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.306947 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:08Z","lastTransitionTime":"2026-01-29T16:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.308396 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.326302 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a420fc-ad8c-41c3-82c3-1e23731e1f55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://689b39c75b6ca5561959fd753c3fe27c3ad2584d5efc8ffa1edd4a0b14b91bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef95d9dbe53c4f2428892b94b669bade8eeae51041691998500d0d2be87a40b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tpc4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.339123 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c7wkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75261312-030c-44eb-8d08-07a35f5bcfcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c7wkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.350576 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.364163 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.377384 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.390284 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 
16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.403562 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2126e0e150d4a578976def8715d596ae31d0561b0eaa832061d4fb86a8a930\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.409794 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.409836 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.409847 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.409864 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.409874 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:08Z","lastTransitionTime":"2026-01-29T16:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.414929 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.431535 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca897a9b4e4a2b647e34e013a9d20e83e7576e3f2f4a44d30ce36c4efff1a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.445749 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50c05fff-ee54-4ee8-a4f9-93807f7df3db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c5243735574fb8f3b0de74ff95f08f9b3efdf7377f0f56e20b15ef6c859fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841de8a754cdf15452fd36d55173c1017dec05d898f5a51109562c77cbbf76b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92150b6456594fe8576872c07810d1984badff360fd
eaa76b4db40179836b5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.512371 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.512671 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.512681 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.512695 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.512705 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:08Z","lastTransitionTime":"2026-01-29T16:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.608138 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 00:24:22.926380969 +0000 UTC Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.614111 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.614161 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.614158 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.614109 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:23:08 crc kubenswrapper[4886]: E0129 16:23:08.614267 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:23:08 crc kubenswrapper[4886]: E0129 16:23:08.614398 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:23:08 crc kubenswrapper[4886]: E0129 16:23:08.614486 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:23:08 crc kubenswrapper[4886]: E0129 16:23:08.614554 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.615925 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.615944 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.615952 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.615963 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.616013 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:08Z","lastTransitionTime":"2026-01-29T16:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.626915 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a420fc-ad8c-41c3-82c3-1e23731e1f55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://689b39c75b6ca5561959fd753c3fe27c3ad2584d5efc8ffa1edd4a0b14b91bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef95d9dbe53c4f2428892b94b669bade8eeae51041691998500d0d2be87a40b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tpc4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.637164 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c7wkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75261312-030c-44eb-8d08-07a35f5bcfcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c7wkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.670636 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.685610 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.697964 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\
\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.709595 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.718207 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.718241 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.718252 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.718267 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.718281 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:08Z","lastTransitionTime":"2026-01-29T16:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.719280 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.731050 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.744989 4886 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2126e0e150d4a578976def8715d596ae31d0561b0
eaa832061d4fb86a8a930\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.755939 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50c05fff-ee54-4ee8-a4f9-93807f7df3db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c5243735574fb8f3b0de74ff95f08f9b3efdf7377f0f56e20b15ef6c859fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841de8a754cdf15452fd36d55173c1017dec05d898f5a51109562c77cbbf76b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92150b6456594fe8576872c07810d1984badff360fdeaa76b4db40179836b5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.765180 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.777736 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca897a9b4e4a2b647e34e013a9d20e83e7576e3f2f4a44d30ce36c4efff1a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.798042 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c67158d7957062b5ce4ee6477aeea8e6c1422
51facc3f1f97cfe2d71126d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21734fb20c50ad0defe1dc5f098c4d5a6406a0313fb256691eef65eef2b91b0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:22:51Z\\\",\\\"message\\\":\\\"-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.53\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8383, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.53\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8081, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 16:22:50.038135 6410 services_controller.go:444] Built service openshift-marketplace/marketplace-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:22:50.038135 6410 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 16:22:50.038141 6410 services_controller.go:445] Built service openshift-marketplace/marketplace-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0129 16:22:50.038236 6410 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} 
was\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"co
ntainerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.807674 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.817937 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.820861 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.820892 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.820903 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.820920 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.820931 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:08Z","lastTransitionTime":"2026-01-29T16:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.828410 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.835996 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.925163 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.925518 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.925872 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.926217 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:08 crc kubenswrapper[4886]: I0129 16:23:08.926580 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:08Z","lastTransitionTime":"2026-01-29T16:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.029487 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.029531 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.029543 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.029560 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.029571 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:09Z","lastTransitionTime":"2026-01-29T16:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.133163 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.133214 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.133225 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.133241 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.133251 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:09Z","lastTransitionTime":"2026-01-29T16:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.169165 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bsnwn_d46238ab-90d4-41b8-b546-6dbff06cf5ed/ovnkube-controller/2.log" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.169825 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bsnwn_d46238ab-90d4-41b8-b546-6dbff06cf5ed/ovnkube-controller/1.log" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.171967 4886 generic.go:334] "Generic (PLEG): container finished" podID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerID="337c67158d7957062b5ce4ee6477aeea8e6c142251facc3f1f97cfe2d71126d0" exitCode=1 Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.172005 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" event={"ID":"d46238ab-90d4-41b8-b546-6dbff06cf5ed","Type":"ContainerDied","Data":"337c67158d7957062b5ce4ee6477aeea8e6c142251facc3f1f97cfe2d71126d0"} Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.172050 4886 scope.go:117] "RemoveContainer" containerID="21734fb20c50ad0defe1dc5f098c4d5a6406a0313fb256691eef65eef2b91b0c" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.172579 4886 scope.go:117] "RemoveContainer" containerID="337c67158d7957062b5ce4ee6477aeea8e6c142251facc3f1f97cfe2d71126d0" Jan 29 16:23:09 crc kubenswrapper[4886]: E0129 16:23:09.172723 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bsnwn_openshift-ovn-kubernetes(d46238ab-90d4-41b8-b546-6dbff06cf5ed)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.186433 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.203742 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca897a9b4e4a2b647e34e013a9d20e83e7576e3f2f4a44d30ce36c4efff1a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.218172 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50c05fff-ee54-4ee8-a4f9-93807f7df3db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c5243735574fb8f3b0de74ff95f08f9b3efdf7377f0f56e20b15ef6c859fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841de8a754cdf15452fd36d55173c1017dec05d898f5a51109562c77cbbf76b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92150b6456594fe8576872c07810d1984badff360fd
eaa76b4db40179836b5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.231266 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.235253 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.235494 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.235514 4886 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.235534 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.235545 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:09Z","lastTransitionTime":"2026-01-29T16:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.241663 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.258241 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c67158d7957062b5ce4ee6477aeea8e6c142251facc3f1f97cfe2d71126d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21734fb20c50ad0defe1dc5f098c4d5a6406a0313fb256691eef65eef2b91b0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:22:51Z\\\",\\\"message\\\":\\\"-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.53\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8383, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.53\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8081, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 16:22:50.038135 6410 services_controller.go:444] Built service openshift-marketplace/marketplace-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:22:50.038135 6410 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 16:22:50.038141 6410 services_controller.go:445] Built service openshift-marketplace/marketplace-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0129 16:22:50.038236 6410 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337c67158d7957062b5ce4ee6477aeea8e6c142251facc3f1f97cfe2d71126d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:23:08Z\\\",\\\"message\\\":\\\"d openshift-image-registry/node-ca-cjsnw\\\\nI0129 16:23:08.683928 6773 default_network_controller.go:776] 
Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0129 16:23:08.683933 6773 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-cjsnw in node crc\\\\nI0129 16:23:08.683693 6773 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 16:23:08.683941 6773 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-cjsnw after 0 failed attempt(s)\\\\nI0129 16:23:08.683945 6773 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 16:23:08.683947 6773 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-cjsnw\\\\nI0129 16:23:08.683952 6773 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0129 16:23:08.683957 6773 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0129 16:23:08.683960 6773 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 16:23:08.683771 6773 obj_retry.go:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994
82919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.268422 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.280569 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.294598 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.305608 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.317398 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a420fc-ad8c-41c3-82c3-1e23731e1f55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://689b39c75b6ca5561959fd753c3fe27c3ad2584d5efc8ffa1edd4a0b14b91bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6ef95d9dbe53c4f2428892b94b669bade8eeae51041691998500d0d2be87a40b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tpc4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.326264 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c7wkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75261312-030c-44eb-8d08-07a35f5bcfcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c7wkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.336766 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.337682 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.337730 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.337743 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.337760 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.337772 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:09Z","lastTransitionTime":"2026-01-29T16:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.348980 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.362613 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.374870 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.388603 4886 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2126e0e150d4a578976def8715d596ae31d0561b0
eaa832061d4fb86a8a930\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.439846 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.439898 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.439919 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.439946 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.439966 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:09Z","lastTransitionTime":"2026-01-29T16:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.542780 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.542824 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.542834 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.542849 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.542859 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:09Z","lastTransitionTime":"2026-01-29T16:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.608437 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 22:30:22.666109815 +0000 UTC Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.644934 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.644976 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.644987 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.645002 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.645011 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:09Z","lastTransitionTime":"2026-01-29T16:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.748198 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.748239 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.748265 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.748287 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.748301 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:09Z","lastTransitionTime":"2026-01-29T16:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.851867 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.852099 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.852161 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.852403 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.852507 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:09Z","lastTransitionTime":"2026-01-29T16:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.954779 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.954810 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.954819 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.954832 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:09 crc kubenswrapper[4886]: I0129 16:23:09.954840 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:09Z","lastTransitionTime":"2026-01-29T16:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.057738 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.057782 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.057794 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.057811 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.057822 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:10Z","lastTransitionTime":"2026-01-29T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.161196 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.161231 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.161239 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.161254 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.161264 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:10Z","lastTransitionTime":"2026-01-29T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.176430 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bsnwn_d46238ab-90d4-41b8-b546-6dbff06cf5ed/ovnkube-controller/2.log" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.180589 4886 scope.go:117] "RemoveContainer" containerID="337c67158d7957062b5ce4ee6477aeea8e6c142251facc3f1f97cfe2d71126d0" Jan 29 16:23:10 crc kubenswrapper[4886]: E0129 16:23:10.180868 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bsnwn_openshift-ovn-kubernetes(d46238ab-90d4-41b8-b546-6dbff06cf5ed)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.195607 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:10Z is after 
2025-08-24T17:21:41Z" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.210693 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.210751 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.210769 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.210795 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.210813 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:10Z","lastTransitionTime":"2026-01-29T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.219031 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c67158d7957062b5ce4ee6477aeea8e6c1422
51facc3f1f97cfe2d71126d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337c67158d7957062b5ce4ee6477aeea8e6c142251facc3f1f97cfe2d71126d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:23:08Z\\\",\\\"message\\\":\\\"d openshift-image-registry/node-ca-cjsnw\\\\nI0129 16:23:08.683928 6773 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0129 16:23:08.683933 6773 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-cjsnw in node crc\\\\nI0129 16:23:08.683693 6773 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 16:23:08.683941 6773 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-cjsnw after 0 failed attempt(s)\\\\nI0129 16:23:08.683945 6773 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 16:23:08.683947 6773 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-cjsnw\\\\nI0129 16:23:08.683952 6773 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0129 16:23:08.683957 6773 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0129 16:23:08.683960 6773 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 16:23:08.683771 6773 obj_retry.go:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:23:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bsnwn_openshift-ovn-kubernetes(d46238ab-90d4-41b8-b546-6dbff06cf5ed)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:10 crc kubenswrapper[4886]: E0129 16:23:10.227935 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.231748 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.231797 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.231806 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.231824 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.232022 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:10Z","lastTransitionTime":"2026-01-29T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.236222 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T16:23:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:10 crc kubenswrapper[4886]: E0129 16:23:10.246750 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.250110 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.250434 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.250617 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.250627 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.250642 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.250652 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:10Z","lastTransitionTime":"2026-01-29T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.262970 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:10 crc kubenswrapper[4886]: E0129 16:23:10.266233 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.269472 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.269646 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.269804 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.269932 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.270068 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:10Z","lastTransitionTime":"2026-01-29T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.274764 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:10 crc kubenswrapper[4886]: E0129 16:23:10.282974 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}], [... images list and nodeInfo elided: identical to the preceding node-status retry entries ...] }}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.285636 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a420fc-ad8c-41c3-82c3-1e23731e1f55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://689b39c75b6ca5561959fd753c3fe27c3ad2584d5efc8ffa1edd4a0b14b91bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef95d9dbe53c4f2428892b94b669bade8eeae51041691998500d0d2be87a40b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tpc4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:10Z is after 2025-08-24T17:21:41Z" Jan 29 
16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.286448 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.286488 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.286503 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.286522 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.286533 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:10Z","lastTransitionTime":"2026-01-29T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.295484 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c7wkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75261312-030c-44eb-8d08-07a35f5bcfcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c7wkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:10 crc kubenswrapper[4886]: E0129 16:23:10.298011 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-29T16:23:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:10 crc kubenswrapper[4886]: E0129 16:23:10.298600 4886 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.300051 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.300308 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.300521 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.300678 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.300995 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:10Z","lastTransitionTime":"2026-01-29T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.307356 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.322115 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.335213 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.345678 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.357140 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.374870 4886 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2126e0e150d4a578976def8715d596ae31d0561b0
eaa832061d4fb86a8a930\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.397221 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca897a9b4e4a2b647e34e013a9d20e83e7576e3f2f4a44d30ce36c4efff1a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.404581 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.404806 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:10 crc 
kubenswrapper[4886]: I0129 16:23:10.404993 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.405159 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.405411 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:10Z","lastTransitionTime":"2026-01-29T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.410713 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50c05fff-ee54-4ee8-a4f9-93807f7df3db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c5243735574fb8f3b0de74ff95f08f9b3efdf7377f0f56e20b15ef6c859fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841de8a754cdf15452fd36d55173c1017dec05d898f5a51109562c77cbbf76b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://92150b6456594fe8576872c07810d1984badff360fdeaa76b4db40179836b5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.427173 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.508423 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.508467 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.508477 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.508494 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.508503 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:10Z","lastTransitionTime":"2026-01-29T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.609658 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 19:02:07.280522604 +0000 UTC Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.610984 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.611017 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.611029 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.611046 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.611057 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:10Z","lastTransitionTime":"2026-01-29T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.615552 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.615657 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.615606 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:10 crc kubenswrapper[4886]: E0129 16:23:10.615798 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.615832 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:23:10 crc kubenswrapper[4886]: E0129 16:23:10.615991 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:23:10 crc kubenswrapper[4886]: E0129 16:23:10.615913 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:23:10 crc kubenswrapper[4886]: E0129 16:23:10.616179 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.714153 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.714194 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.714205 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.714222 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.714234 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:10Z","lastTransitionTime":"2026-01-29T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.816254 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.816298 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.816306 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.816321 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.816345 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:10Z","lastTransitionTime":"2026-01-29T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.918987 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.919035 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.919048 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.919066 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:10 crc kubenswrapper[4886]: I0129 16:23:10.919077 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:10Z","lastTransitionTime":"2026-01-29T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.022146 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.022215 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.022236 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.022261 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.022279 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:11Z","lastTransitionTime":"2026-01-29T16:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.124691 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.124728 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.124736 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.124750 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.124758 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:11Z","lastTransitionTime":"2026-01-29T16:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.227648 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.227806 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.227826 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.227850 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.227869 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:11Z","lastTransitionTime":"2026-01-29T16:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.330703 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.330763 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.330781 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.330806 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.330823 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:11Z","lastTransitionTime":"2026-01-29T16:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.433816 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.433854 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.433868 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.433935 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.433953 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:11Z","lastTransitionTime":"2026-01-29T16:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.536635 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.536686 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.536703 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.536724 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.536741 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:11Z","lastTransitionTime":"2026-01-29T16:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.610209 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 17:40:02.769344502 +0000 UTC Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.638977 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.639007 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.639014 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.639027 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.639035 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:11Z","lastTransitionTime":"2026-01-29T16:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.741457 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.741528 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.741583 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.741602 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.741615 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:11Z","lastTransitionTime":"2026-01-29T16:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.843389 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.843429 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.843439 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.843454 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.843466 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:11Z","lastTransitionTime":"2026-01-29T16:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.945662 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.945720 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.945730 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.945748 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:11 crc kubenswrapper[4886]: I0129 16:23:11.945758 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:11Z","lastTransitionTime":"2026-01-29T16:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.047884 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.047923 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.047935 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.047955 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.047966 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:12Z","lastTransitionTime":"2026-01-29T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.151147 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.151178 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.151186 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.151203 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.151213 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:12Z","lastTransitionTime":"2026-01-29T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.253012 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.253063 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.253072 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.253086 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.253094 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:12Z","lastTransitionTime":"2026-01-29T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.355216 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.355254 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.355262 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.355276 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.355285 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:12Z","lastTransitionTime":"2026-01-29T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.458044 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.458096 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.458114 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.458138 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.458153 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:12Z","lastTransitionTime":"2026-01-29T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.561433 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.561497 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.561520 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.561548 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.561570 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:12Z","lastTransitionTime":"2026-01-29T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.611211 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 05:59:48.850023195 +0000 UTC Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.614709 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.614746 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.614771 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.614973 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:12 crc kubenswrapper[4886]: E0129 16:23:12.615098 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:23:12 crc kubenswrapper[4886]: E0129 16:23:12.615188 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:23:12 crc kubenswrapper[4886]: E0129 16:23:12.615192 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:23:12 crc kubenswrapper[4886]: E0129 16:23:12.615426 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc"
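The util.go:30 / pod_workers.go:1301 pairs above repeat every couple of seconds for the same four pods: the kubelet wants to start a new sandbox, then skips the sync because the pod network is not ready. When reading a dump like this, it helps to collapse the repetition into a per-pod count; a small stdlib-only helper could do that (the script name and invocation are hypothetical, not part of the cluster):

```python
# Counts how many times each pod was skipped with "network is not ready",
# collapsing the repeating pod_workers.go entries into a per-pod summary.
# Hypothetical usage:  journalctl -u kubelet | python3 skipped.py
import re
import sys
from collections import Counter

# Grabs the pod="<namespace>/<name>" field of each "Error syncing pod" entry.
SKIP = re.compile(r'"Error syncing pod, skipping".*?pod="([^"]+)"')

counts = Counter()
for line in sys.stdin:
    counts.update(SKIP.findall(line))

for pod, n in counts.most_common():
    print(f"{n:4d}  {pod}")
```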
pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.664074 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.664158 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.664184 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.664214 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.664236 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:12Z","lastTransitionTime":"2026-01-29T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.767353 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.767405 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.767421 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.767444 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.767461 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:12Z","lastTransitionTime":"2026-01-29T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.870397 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.870450 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.870467 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.870492 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.870510 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:12Z","lastTransitionTime":"2026-01-29T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.972946 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.973002 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.973010 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.973024 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:12 crc kubenswrapper[4886]: I0129 16:23:12.973032 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:12Z","lastTransitionTime":"2026-01-29T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.075714 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.075747 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.075756 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.075771 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.075779 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:13Z","lastTransitionTime":"2026-01-29T16:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.178203 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.178262 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.178279 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.178299 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.178315 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:13Z","lastTransitionTime":"2026-01-29T16:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.281196 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.281255 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.281290 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.281319 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.281375 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:13Z","lastTransitionTime":"2026-01-29T16:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.384469 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.384526 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.384539 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.384557 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.384569 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:13Z","lastTransitionTime":"2026-01-29T16:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.487420 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.487463 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.487475 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.487492 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.487521 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:13Z","lastTransitionTime":"2026-01-29T16:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.590546 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.590592 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.590609 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.590633 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.590649 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:13Z","lastTransitionTime":"2026-01-29T16:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.612403 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 13:47:22.10040675 +0000 UTC Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.693560 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.693606 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.693621 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.693640 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.693654 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:13Z","lastTransitionTime":"2026-01-29T16:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.796708 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.796752 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.796764 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.796780 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.796791 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:13Z","lastTransitionTime":"2026-01-29T16:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
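The certificate_manager.go:356 entries through this stretch show a fixed expiration of 2026-02-24 05:53:03 but a rotation deadline that jumps between mid-November and mid-January on each pass. That is expected: the kubelet picks a fresh, jittered rotation point inside the later part of the certificate's validity every time it evaluates rotation, so the deadline is a random draw, not a stored value. A minimal sketch, assuming the commonly cited 70-90% window from upstream client-go (the exact window is an assumption here; the log itself only shows the deadline bouncing around in the weeks before expiry):

```python
# Sketch of a jittered rotation deadline like the one certificate_manager
# logs above. The 70-90% window follows upstream client-go's documented
# behavior and is an assumption; the fixed expiration date is from the log.
import random
from datetime import datetime, timedelta

def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
    """Pick a uniformly random point in the 70-90% span of the validity."""
    validity = (not_after - not_before).total_seconds()
    fraction = 0.7 + 0.2 * random.random()
    return not_before + timedelta(seconds=validity * fraction)

expires = datetime(2026, 2, 24, 5, 53, 3)   # from the log
issued = expires - timedelta(days=365)      # assumed validity period
for _ in range(3):
    # Re-evaluating yields a different deadline each time, which is why
    # the logged deadline jumps while the expiration stays fixed.
    print(rotation_deadline(issued, expires))
```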
Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.898833 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.898867 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.898884 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.898906 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:13 crc kubenswrapper[4886]: I0129 16:23:13.898918 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:13Z","lastTransitionTime":"2026-01-29T16:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.001728 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.001760 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.001770 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.001786 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.001797 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:14Z","lastTransitionTime":"2026-01-29T16:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.104927 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.104964 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.104979 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.105000 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.105014 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:14Z","lastTransitionTime":"2026-01-29T16:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.208235 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.208316 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.208382 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.208424 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.208446 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:14Z","lastTransitionTime":"2026-01-29T16:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.311110 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.311155 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.311171 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.311195 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.311212 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:14Z","lastTransitionTime":"2026-01-29T16:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.414286 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.414367 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.414386 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.414410 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.414427 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:14Z","lastTransitionTime":"2026-01-29T16:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.517497 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.517546 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.517563 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.517587 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.517604 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:14Z","lastTransitionTime":"2026-01-29T16:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.613632 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 01:23:30.793242761 +0000 UTC Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.615621 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.615711 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:14 crc kubenswrapper[4886]: E0129 16:23:14.615805 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.615724 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:14 crc kubenswrapper[4886]: E0129 16:23:14.615905 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.615961 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:14 crc kubenswrapper[4886]: E0129 16:23:14.616125 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:23:14 crc kubenswrapper[4886]: E0129 16:23:14.616234 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.619119 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.619163 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.619172 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.619184 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.619192 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:14Z","lastTransitionTime":"2026-01-29T16:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.721584 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.721663 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.721691 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.721718 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.721736 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:14Z","lastTransitionTime":"2026-01-29T16:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.825073 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.825131 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.825144 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.825162 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.825175 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:14Z","lastTransitionTime":"2026-01-29T16:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.928814 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.929082 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.929231 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.929390 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.929520 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:14Z","lastTransitionTime":"2026-01-29T16:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:14 crc kubenswrapper[4886]: I0129 16:23:14.989611 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/75261312-030c-44eb-8d08-07a35f5bcfcc-metrics-certs\") pod \"network-metrics-daemon-c7wkw\" (UID: \"75261312-030c-44eb-8d08-07a35f5bcfcc\") " pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:14 crc kubenswrapper[4886]: E0129 16:23:14.989821 4886 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:23:14 crc kubenswrapper[4886]: E0129 16:23:14.989894 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75261312-030c-44eb-8d08-07a35f5bcfcc-metrics-certs podName:75261312-030c-44eb-8d08-07a35f5bcfcc nodeName:}" failed. No retries permitted until 2026-01-29 16:23:46.989870774 +0000 UTC m=+109.898590086 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/75261312-030c-44eb-8d08-07a35f5bcfcc-metrics-certs") pod "network-metrics-daemon-c7wkw" (UID: "75261312-030c-44eb-8d08-07a35f5bcfcc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.032473 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.032544 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.032567 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.032594 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.032611 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:15Z","lastTransitionTime":"2026-01-29T16:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.135711 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.135812 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.135837 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.135867 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.135889 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:15Z","lastTransitionTime":"2026-01-29T16:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
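The mount failure above is the first non-network error in this stretch: the secret openshift-multus/metrics-daemon-secret is "not registered" with the kubelet's secret manager yet (typically transient while the node is still NotReady and its watches have not synced), so the MountVolume.SetUp operation is parked with exponential backoff: no retries permitted until 16:23:46.989, i.e. durationBeforeRetry 32s after the 16:23:14.989 failure (m=+109.9 is seconds since kubelet start). A 32 s step is consistent with a wait that doubles from a 500 ms base; the base and ~2 m cap in the sketch below mirror kubelet's nested pending operations as an assumption, since the log itself shows only this one step:

```python
# Doubling-backoff sketch matching the "durationBeforeRetry 32s" step in
# the volume error above. The 500 ms base and 2m2s cap are assumptions;
# only the 32 s step is actually visible in the log.
from datetime import timedelta

def backoff_schedule(base=timedelta(milliseconds=500),
                     cap=timedelta(minutes=2, seconds=2),
                     failures=8):
    wait = base
    for n in range(1, failures + 1):
        yield n, wait
        wait = min(wait * 2, cap)  # double, but never exceed the cap

for attempt, wait in backoff_schedule():
    print(f"after failure {attempt}: retry in {wait}")
# Failure 7 prints 0:00:32, i.e. the retry at 16:23:46.989 scheduled
# 32 s after the 16:23:14.989 failure.
```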
Has your network provider started?"} Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.239014 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.239063 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.239080 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.239105 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.239124 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:15Z","lastTransitionTime":"2026-01-29T16:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.342872 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.342944 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.342964 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.342989 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.343007 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:15Z","lastTransitionTime":"2026-01-29T16:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.446107 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.446189 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.446201 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.446220 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.446231 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:15Z","lastTransitionTime":"2026-01-29T16:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.548986 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.549065 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.549091 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.549124 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.549150 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:15Z","lastTransitionTime":"2026-01-29T16:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.614222 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 13:25:44.977416722 +0000 UTC Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.651149 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.651200 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.651215 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.651235 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.651250 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:15Z","lastTransitionTime":"2026-01-29T16:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.753163 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.753223 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.753239 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.753261 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.753277 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:15Z","lastTransitionTime":"2026-01-29T16:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.856661 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.856706 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.856720 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.856740 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.856767 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:15Z","lastTransitionTime":"2026-01-29T16:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.959842 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.960131 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.960211 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.960315 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:15 crc kubenswrapper[4886]: I0129 16:23:15.960418 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:15Z","lastTransitionTime":"2026-01-29T16:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.063309 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.063392 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.063404 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.063420 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.063433 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:16Z","lastTransitionTime":"2026-01-29T16:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.166838 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.166905 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.166929 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.166960 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.166983 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:16Z","lastTransitionTime":"2026-01-29T16:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.269765 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.269807 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.269817 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.269834 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.269847 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:16Z","lastTransitionTime":"2026-01-29T16:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.372250 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.372320 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.372350 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.372368 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.372379 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:16Z","lastTransitionTime":"2026-01-29T16:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.475175 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.475237 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.475256 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.475282 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.475301 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:16Z","lastTransitionTime":"2026-01-29T16:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.578088 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.578124 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.578136 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.578152 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.578162 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:16Z","lastTransitionTime":"2026-01-29T16:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.614129 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.614180 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.614133 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:16 crc kubenswrapper[4886]: E0129 16:23:16.614354 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:23:16 crc kubenswrapper[4886]: E0129 16:23:16.614406 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:23:16 crc kubenswrapper[4886]: E0129 16:23:16.614512 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.614488 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 00:21:22.393344362 +0000 UTC Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.614724 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:23:16 crc kubenswrapper[4886]: E0129 16:23:16.614793 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.680932 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.680975 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.680992 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.681016 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.681033 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:16Z","lastTransitionTime":"2026-01-29T16:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.784372 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.784467 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.784486 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.784512 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.784529 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:16Z","lastTransitionTime":"2026-01-29T16:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.887971 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.888041 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.888063 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.888088 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.888105 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:16Z","lastTransitionTime":"2026-01-29T16:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.990868 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.990926 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.990943 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.990965 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:16 crc kubenswrapper[4886]: I0129 16:23:16.991111 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:16Z","lastTransitionTime":"2026-01-29T16:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.094250 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.094313 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.094357 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.094387 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.094405 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:17Z","lastTransitionTime":"2026-01-29T16:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.197693 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.197755 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.197805 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.197830 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.197847 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:17Z","lastTransitionTime":"2026-01-29T16:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.301823 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.301879 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.301895 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.301920 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.301944 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:17Z","lastTransitionTime":"2026-01-29T16:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.405415 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.405463 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.405476 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.405494 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.405507 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:17Z","lastTransitionTime":"2026-01-29T16:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.508031 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.508094 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.508117 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.508144 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.508162 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:17Z","lastTransitionTime":"2026-01-29T16:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.610521 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.610605 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.610627 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.610656 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.610676 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:17Z","lastTransitionTime":"2026-01-29T16:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.615563 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 15:32:48.217782089 +0000 UTC Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.713760 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.713837 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.713860 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.713890 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.713912 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:17Z","lastTransitionTime":"2026-01-29T16:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.816969 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.817003 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.817011 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.817025 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.817034 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:17Z","lastTransitionTime":"2026-01-29T16:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.919660 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.919700 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.919713 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.919730 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:17 crc kubenswrapper[4886]: I0129 16:23:17.919742 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:17Z","lastTransitionTime":"2026-01-29T16:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.022352 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.022657 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.022828 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.022965 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.023088 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:18Z","lastTransitionTime":"2026-01-29T16:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.126467 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.126505 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.126518 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.126536 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.126548 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:18Z","lastTransitionTime":"2026-01-29T16:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.229256 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.229359 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.229387 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.229418 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.229441 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:18Z","lastTransitionTime":"2026-01-29T16:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.332090 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.332117 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.332127 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.332140 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.332149 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:18Z","lastTransitionTime":"2026-01-29T16:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.434906 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.434961 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.434978 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.435001 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.435020 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:18Z","lastTransitionTime":"2026-01-29T16:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.537823 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.537899 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.537918 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.537987 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.538081 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:18Z","lastTransitionTime":"2026-01-29T16:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.614704 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:23:18 crc kubenswrapper[4886]: E0129 16:23:18.614983 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.615202 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.615284 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:18 crc kubenswrapper[4886]: E0129 16:23:18.615678 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.615316 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.615712 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 07:09:22.674337171 +0000 UTC Jan 29 16:23:18 crc kubenswrapper[4886]: E0129 16:23:18.615812 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:23:18 crc kubenswrapper[4886]: E0129 16:23:18.616005 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.638219 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a420fc-ad8c-41c3-82c3-1e23731e1f55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://689b39c75b6ca5561959fd753c3fe27c3ad2584d5efc8ffa1edd4a0b14b91bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef95d9dbe53c4f2428892b94b669bade8eeae51041691998500d0d2be87a40b\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tpc4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.644279 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.644450 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.644482 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.644563 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.644589 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:18Z","lastTransitionTime":"2026-01-29T16:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.662454 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c7wkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75261312-030c-44eb-8d08-07a35f5bcfcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c7wkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.684161 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.706072 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.721709 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\
\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.736232 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.747254 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.747299 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.747312 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.747352 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.747365 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:18Z","lastTransitionTime":"2026-01-29T16:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.754164 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.765470 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.779926 4886 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2126e0e150d4a578976def8715d596ae31d0561b0
eaa832061d4fb86a8a930\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.793639 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50c05fff-ee54-4ee8-a4f9-93807f7df3db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c5243735574fb8f3b0de74ff95f08f9b3efdf7377f0f56e20b15ef6c859fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841de8a754cdf15452fd36d55173c1017dec05d898f5a51109562c77cbbf76b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92150b6456594fe8576872c07810d1984badff360fdeaa76b4db40179836b5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.810651 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.834298 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca897a9b4e4a2b647e34e013a9d20e83e7576e3f2f4a44d30ce36c4efff1a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.852868 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.853222 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.853471 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.853697 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.853986 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:18Z","lastTransitionTime":"2026-01-29T16:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.866895 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c67158d7957062b5ce4ee6477aeea8e6c142251facc3f1f97cfe2d71126d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337c67158d7957062b5ce4ee6477aeea8e6c142251facc3f1f97cfe2d71126d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:23:08Z\\\",\\\"message\\\":\\\"d openshift-image-registry/node-ca-cjsnw\\\\nI0129 16:23:08.683928 6773 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0129 16:23:08.683933 6773 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-cjsnw in node crc\\\\nI0129 16:23:08.683693 6773 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 16:23:08.683941 6773 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-cjsnw after 0 failed attempt(s)\\\\nI0129 16:23:08.683945 6773 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 16:23:08.683947 6773 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-cjsnw\\\\nI0129 16:23:08.683952 6773 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0129 16:23:08.683957 6773 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0129 16:23:08.683960 6773 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 16:23:08.683771 6773 obj_retry.go:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:23:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bsnwn_openshift-ovn-kubernetes(d46238ab-90d4-41b8-b546-6dbff06cf5ed)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.882369 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.896611 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-
manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.917630 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.932316 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.956636 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.956702 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.956716 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.956738 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:18 crc kubenswrapper[4886]: I0129 16:23:18.956751 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:18Z","lastTransitionTime":"2026-01-29T16:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.059300 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.059419 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.059447 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.059479 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.059502 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:19Z","lastTransitionTime":"2026-01-29T16:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.162310 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.162417 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.162455 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.162474 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.162486 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:19Z","lastTransitionTime":"2026-01-29T16:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.265506 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.265570 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.265588 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.265616 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.265671 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:19Z","lastTransitionTime":"2026-01-29T16:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.367469 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.367506 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.367517 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.367533 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.367545 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:19Z","lastTransitionTime":"2026-01-29T16:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.470467 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.470504 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.470515 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.470529 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.470538 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:19Z","lastTransitionTime":"2026-01-29T16:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.573532 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.573613 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.573642 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.573671 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.573693 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:19Z","lastTransitionTime":"2026-01-29T16:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.616418 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 04:25:09.887049488 +0000 UTC Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.677069 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.677119 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.677132 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.677151 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.677162 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:19Z","lastTransitionTime":"2026-01-29T16:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.779396 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.779430 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.779446 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.779465 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.779479 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:19Z","lastTransitionTime":"2026-01-29T16:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.881734 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.881808 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.881829 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.881858 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.881880 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:19Z","lastTransitionTime":"2026-01-29T16:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.984540 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.984622 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.984635 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.984651 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:19 crc kubenswrapper[4886]: I0129 16:23:19.984663 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:19Z","lastTransitionTime":"2026-01-29T16:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.087745 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.087803 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.087822 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.087847 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.087875 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:20Z","lastTransitionTime":"2026-01-29T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.191449 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.191507 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.191526 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.191552 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.191571 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:20Z","lastTransitionTime":"2026-01-29T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.215845 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4dstj_b415d17e-f329-40e7-8a3f-32881cb5347a/kube-multus/0.log" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.215895 4886 generic.go:334] "Generic (PLEG): container finished" podID="b415d17e-f329-40e7-8a3f-32881cb5347a" containerID="91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df" exitCode=1 Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.215924 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4dstj" event={"ID":"b415d17e-f329-40e7-8a3f-32881cb5347a","Type":"ContainerDied","Data":"91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df"} Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.216286 4886 scope.go:117] "RemoveContainer" containerID="91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.239534 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50c05fff-ee54-4ee8-a4f9-93807f7df3db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c5243735574fb8f3b0de74ff95f08f9b3efdf7377f0f56e20b15ef6c859fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841de8a754cdf15452fd36d55173c1017dec05d898f5a51109562c77cbbf76b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92150b6456594fe8576872c07810d1984badff360fdeaa76b4db40179836b5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.254201 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.271091 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca897a9b4e4a2b647e34e013a9d20e83e7576e3f2f4a44d30ce36c4efff1a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.286118 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2
597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.293512 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.293814 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.293920 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.294022 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:20 crc 
kubenswrapper[4886]: I0129 16:23:20.294113 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:20Z","lastTransitionTime":"2026-01-29T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.300624 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.312625 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.332582 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c67158d7957062b5ce4ee6477aeea8e6c142251facc3f1f97cfe2d71126d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337c67158d7957062b5ce4ee6477aeea8e6c142251facc3f1f97cfe2d71126d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:23:08Z\\\",\\\"message\\\":\\\"d openshift-image-registry/node-ca-cjsnw\\\\nI0129 16:23:08.683928 6773 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0129 16:23:08.683933 6773 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-cjsnw in node crc\\\\nI0129 16:23:08.683693 6773 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 16:23:08.683941 6773 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-cjsnw after 0 failed attempt(s)\\\\nI0129 16:23:08.683945 6773 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 16:23:08.683947 6773 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-cjsnw\\\\nI0129 16:23:08.683952 6773 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0129 16:23:08.683957 6773 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0129 16:23:08.683960 6773 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 16:23:08.683771 6773 obj_retry.go:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:23:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bsnwn_openshift-ovn-kubernetes(d46238ab-90d4-41b8-b546-6dbff06cf5ed)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.344510 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.356683 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.369317 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.381675 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"message\\\":\\\"2026-01-29T16:22:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2726de4a-30b3-494a-98bf-84dc414659b9\\\\n2026-01-29T16:22:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2726de4a-30b3-494a-98bf-84dc414659b9 to /host/opt/cni/bin/\\\\n2026-01-29T16:22:35Z [verbose] multus-daemon started\\\\n2026-01-29T16:22:35Z [verbose] Readiness Indicator file check\\\\n2026-01-29T16:23:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.390771 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a420fc-ad8c-41c3-82c3-1e23731e1f55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://689b39c75b6ca5561959fd753c3fe27c3ad2584d5efc8ffa1edd4a0b14b91bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef95d9dbe53c4f2428892b94b669bade8eeae51041691998500d0d2be87a40b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tpc4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.396001 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.396039 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.396050 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.396067 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.396079 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:20Z","lastTransitionTime":"2026-01-29T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.402481 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c7wkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75261312-030c-44eb-8d08-07a35f5bcfcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c7wkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.412302 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.430005 4886 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2126e0e150d4a578976def8715d596ae31d0561b0
eaa832061d4fb86a8a930\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.444396 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.462302 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.469907 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.470020 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.470097 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.470173 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.470244 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:20Z","lastTransitionTime":"2026-01-29T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:20 crc kubenswrapper[4886]: E0129 16:23:20.482997 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.487678 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.487825 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.488000 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.488110 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.488199 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:20Z","lastTransitionTime":"2026-01-29T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:20 crc kubenswrapper[4886]: E0129 16:23:20.505450 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.509917 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.509965 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.509983 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.510009 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.510026 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:20Z","lastTransitionTime":"2026-01-29T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:20 crc kubenswrapper[4886]: E0129 16:23:20.526266 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.530965 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.531120 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.531204 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.531346 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.531443 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:20Z","lastTransitionTime":"2026-01-29T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:20 crc kubenswrapper[4886]: E0129 16:23:20.543790 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.551408 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.551519 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.551561 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.551595 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.551619 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:20Z","lastTransitionTime":"2026-01-29T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:20 crc kubenswrapper[4886]: E0129 16:23:20.573703 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:20 crc kubenswrapper[4886]: E0129 16:23:20.573929 4886 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.576199 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.576253 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.576270 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.576705 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.576744 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:20Z","lastTransitionTime":"2026-01-29T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.614733 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.614828 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:20 crc kubenswrapper[4886]: E0129 16:23:20.614974 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:23:20 crc kubenswrapper[4886]: E0129 16:23:20.615300 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.615575 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.615624 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:23:20 crc kubenswrapper[4886]: E0129 16:23:20.615784 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:23:20 crc kubenswrapper[4886]: E0129 16:23:20.615799 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.616656 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 11:49:44.718033542 +0000 UTC Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.679274 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.679310 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.679318 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.679356 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.679366 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:20Z","lastTransitionTime":"2026-01-29T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.781917 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.781961 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.781978 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.782001 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.782017 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:20Z","lastTransitionTime":"2026-01-29T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.885527 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.885589 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.885606 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.885631 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.885663 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:20Z","lastTransitionTime":"2026-01-29T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.989092 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.989138 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.989150 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.989168 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:20 crc kubenswrapper[4886]: I0129 16:23:20.989183 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:20Z","lastTransitionTime":"2026-01-29T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.091928 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.092010 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.092038 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.092066 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.092087 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:21Z","lastTransitionTime":"2026-01-29T16:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.195166 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.195238 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.195272 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.195302 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.195323 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:21Z","lastTransitionTime":"2026-01-29T16:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.221993 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4dstj_b415d17e-f329-40e7-8a3f-32881cb5347a/kube-multus/0.log" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.222088 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4dstj" event={"ID":"b415d17e-f329-40e7-8a3f-32881cb5347a","Type":"ContainerStarted","Data":"0fbf425aaf0e257fa72dc096677e8404be047665a998729a21862b66d4162248"} Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.235168 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c7wkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75261312-030c-44eb-8d08-07a35f5bcfcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c7wkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.250131 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.262946 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.279051 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fbf425aaf0e257fa72dc096677e8404be047665a998729a21862b66d4162248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"message\\\":\\\"2026-01-29T16:22:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2726de4a-30b3-494a-98bf-84dc414659b9\\\\n2026-01-29T16:22:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2726de4a-30b3-494a-98bf-84dc414659b9 to /host/opt/cni/bin/\\\\n2026-01-29T16:22:35Z [verbose] multus-daemon started\\\\n2026-01-29T16:22:35Z [verbose] Readiness Indicator file check\\\\n2026-01-29T16:23:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.293859 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a420fc-ad8c-41c3-82c3-1e23731e1f55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://689b39c75b6ca5561959fd753c3fe27c3ad2584d5efc8ffa1edd4a0b14b91bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef95d9dbe53c4f2428892b94b669bade8eeae51041691998500d0d2be87a40b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tpc4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:21Z is after 2025-08-24T17:21:41Z" Jan 29 
16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.298023 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.298063 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.298073 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.298092 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.298104 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:21Z","lastTransitionTime":"2026-01-29T16:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.311523 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.325507 4886 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T16:23:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.342059 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2126e0e150d4a578976def8715d596ae31d0561b0eaa832061d4fb86a8a930\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.354831 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.379643 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50c05fff-ee54-4ee8-a4f9-93807f7df3db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c5243735574fb8f3b0de74ff95f08f9b3efdf7377f0f56e20b15ef6c859fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841de8a754cdf15452fd36d55173c1017dec05d898f5a51109562c77cbbf76b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92150b6456594fe8576872c07810d1984badff360fdeaa76b4db40179836b5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.401035 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.401072 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.401080 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.401094 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.401103 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:21Z","lastTransitionTime":"2026-01-29T16:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.403900 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.425028 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca897a9b4e4a2b647e34e013a9d20e83e7576e3f2f4a44d30ce36c4efff1a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.437144 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.451402 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.463542 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.473785 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.495137 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c67158d7957062b5ce4ee6477aeea8e6c142251facc3f1f97cfe2d71126d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337c67158d7957062b5ce4ee6477aeea8e6c142251facc3f1f97cfe2d71126d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:23:08Z\\\",\\\"message\\\":\\\"d openshift-image-registry/node-ca-cjsnw\\\\nI0129 16:23:08.683928 6773 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0129 16:23:08.683933 6773 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-cjsnw in node crc\\\\nI0129 16:23:08.683693 6773 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 16:23:08.683941 6773 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-cjsnw after 0 failed attempt(s)\\\\nI0129 16:23:08.683945 6773 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 16:23:08.683947 6773 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-cjsnw\\\\nI0129 16:23:08.683952 6773 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0129 16:23:08.683957 6773 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0129 16:23:08.683960 6773 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 16:23:08.683771 6773 
obj_retry.go:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:23:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bsnwn_openshift-ovn-kubernetes(d46238ab-90d4-41b8-b546-6dbff06cf5ed)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.503772 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.503844 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.503871 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.503904 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.503930 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:21Z","lastTransitionTime":"2026-01-29T16:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.606580 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.606646 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.606669 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.606698 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.606720 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:21Z","lastTransitionTime":"2026-01-29T16:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.615511 4886 scope.go:117] "RemoveContainer" containerID="337c67158d7957062b5ce4ee6477aeea8e6c142251facc3f1f97cfe2d71126d0" Jan 29 16:23:21 crc kubenswrapper[4886]: E0129 16:23:21.615819 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bsnwn_openshift-ovn-kubernetes(d46238ab-90d4-41b8-b546-6dbff06cf5ed)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.617384 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 18:43:11.573311744 +0000 UTC Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.709861 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.709951 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.709976 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.710007 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.710031 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:21Z","lastTransitionTime":"2026-01-29T16:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.813254 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.813305 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.813355 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.813382 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.813400 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:21Z","lastTransitionTime":"2026-01-29T16:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.916971 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.917012 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.917024 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.917041 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:21 crc kubenswrapper[4886]: I0129 16:23:21.917052 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:21Z","lastTransitionTime":"2026-01-29T16:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.020264 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.020462 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.020481 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.020507 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.020527 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:22Z","lastTransitionTime":"2026-01-29T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.128422 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.128493 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.128512 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.128536 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.128555 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:22Z","lastTransitionTime":"2026-01-29T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.232013 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.232079 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.232101 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.232163 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.232417 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:22Z","lastTransitionTime":"2026-01-29T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.335591 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.335684 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.335710 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.335743 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.335764 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:22Z","lastTransitionTime":"2026-01-29T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.438017 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.438053 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.438063 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.438079 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.438090 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:22Z","lastTransitionTime":"2026-01-29T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.541081 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.541130 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.541147 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.541170 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.541186 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:22Z","lastTransitionTime":"2026-01-29T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.614486 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.614559 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.614625 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.614708 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:22 crc kubenswrapper[4886]: E0129 16:23:22.614923 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:23:22 crc kubenswrapper[4886]: E0129 16:23:22.615172 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:23:22 crc kubenswrapper[4886]: E0129 16:23:22.615307 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:23:22 crc kubenswrapper[4886]: E0129 16:23:22.615446 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.618002 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 23:16:31.428860315 +0000 UTC Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.644260 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.644317 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.644352 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.644369 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.644383 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:22Z","lastTransitionTime":"2026-01-29T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.748039 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.748121 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.748145 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.748175 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.748224 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:22Z","lastTransitionTime":"2026-01-29T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.850925 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.850955 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.850962 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.850976 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.850985 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:22Z","lastTransitionTime":"2026-01-29T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.954205 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.954261 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.954280 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.954314 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:22 crc kubenswrapper[4886]: I0129 16:23:22.954392 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:22Z","lastTransitionTime":"2026-01-29T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.056817 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.056870 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.056887 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.056911 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.056930 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:23Z","lastTransitionTime":"2026-01-29T16:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.160083 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.160145 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.160168 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.160198 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.160220 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:23Z","lastTransitionTime":"2026-01-29T16:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.263656 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.263722 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.263744 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.263768 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.263785 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:23Z","lastTransitionTime":"2026-01-29T16:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.367758 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.367795 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.367806 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.367825 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.367836 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:23Z","lastTransitionTime":"2026-01-29T16:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.471379 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.471770 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.471787 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.471812 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.471830 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:23Z","lastTransitionTime":"2026-01-29T16:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.574584 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.574653 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.574675 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.574704 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.574727 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:23Z","lastTransitionTime":"2026-01-29T16:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.618188 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 05:42:35.407731746 +0000 UTC Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.677824 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.677885 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.677902 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.677926 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.677944 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:23Z","lastTransitionTime":"2026-01-29T16:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.780786 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.780826 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.780841 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.780861 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.780876 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:23Z","lastTransitionTime":"2026-01-29T16:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.883798 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.883883 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.883906 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.883936 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.883964 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:23Z","lastTransitionTime":"2026-01-29T16:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.986687 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.986752 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.986776 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.986850 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:23 crc kubenswrapper[4886]: I0129 16:23:23.986878 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:23Z","lastTransitionTime":"2026-01-29T16:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.089951 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.090008 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.090022 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.090045 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.090059 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:24Z","lastTransitionTime":"2026-01-29T16:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.192727 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.192769 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.192779 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.192799 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.192822 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:24Z","lastTransitionTime":"2026-01-29T16:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.294970 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.295021 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.295034 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.295086 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.295099 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:24Z","lastTransitionTime":"2026-01-29T16:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.397829 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.397894 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.397911 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.397935 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.397952 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:24Z","lastTransitionTime":"2026-01-29T16:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.500903 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.500973 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.500991 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.501018 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.501051 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:24Z","lastTransitionTime":"2026-01-29T16:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.604458 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.604523 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.604543 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.604569 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.604586 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:24Z","lastTransitionTime":"2026-01-29T16:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.614360 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.614408 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:24 crc kubenswrapper[4886]: E0129 16:23:24.614510 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.614573 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.614595 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:24 crc kubenswrapper[4886]: E0129 16:23:24.614741 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:23:24 crc kubenswrapper[4886]: E0129 16:23:24.614604 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:23:24 crc kubenswrapper[4886]: E0129 16:23:24.614976 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.619031 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 23:40:36.306297954 +0000 UTC Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.708181 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.708226 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.708238 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.708255 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.708269 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:24Z","lastTransitionTime":"2026-01-29T16:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.811116 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.811185 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.811213 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.811247 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.811269 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:24Z","lastTransitionTime":"2026-01-29T16:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.914557 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.914610 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.914628 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.914654 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:24 crc kubenswrapper[4886]: I0129 16:23:24.914674 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:24Z","lastTransitionTime":"2026-01-29T16:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.018589 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.018655 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.018724 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.018761 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.018784 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:25Z","lastTransitionTime":"2026-01-29T16:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.122308 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.122388 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.122403 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.122422 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.122433 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:25Z","lastTransitionTime":"2026-01-29T16:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.225616 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.225691 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.225715 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.225746 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.225770 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:25Z","lastTransitionTime":"2026-01-29T16:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.328742 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.328791 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.328806 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.328826 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.328839 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:25Z","lastTransitionTime":"2026-01-29T16:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.431714 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.431766 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.431790 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.431819 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.431840 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:25Z","lastTransitionTime":"2026-01-29T16:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.535065 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.535123 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.535140 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.535184 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.535203 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:25Z","lastTransitionTime":"2026-01-29T16:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.619998 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 20:01:02.031777429 +0000 UTC Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.630459 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.638219 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.638288 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.638322 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.638447 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.638470 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:25Z","lastTransitionTime":"2026-01-29T16:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.741596 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.741667 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.741692 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.741720 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.741742 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:25Z","lastTransitionTime":"2026-01-29T16:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.844982 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.845083 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.845111 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.845140 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.845162 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:25Z","lastTransitionTime":"2026-01-29T16:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.948082 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.948162 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.948187 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.948214 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:25 crc kubenswrapper[4886]: I0129 16:23:25.948232 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:25Z","lastTransitionTime":"2026-01-29T16:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.051575 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.051654 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.051680 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.051714 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.051739 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:26Z","lastTransitionTime":"2026-01-29T16:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.155618 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.155665 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.155674 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.155689 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.155697 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:26Z","lastTransitionTime":"2026-01-29T16:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.258521 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.258577 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.258595 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.258629 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.258647 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:26Z","lastTransitionTime":"2026-01-29T16:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.362203 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.362268 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.362295 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.362320 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.362369 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:26Z","lastTransitionTime":"2026-01-29T16:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.464941 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.465000 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.465019 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.465046 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.465063 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:26Z","lastTransitionTime":"2026-01-29T16:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.514254 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:23:26 crc kubenswrapper[4886]: E0129 16:23:26.514484 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:30.514447888 +0000 UTC m=+153.423167200 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.514564 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.514639 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.514719 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.514773 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:26 crc kubenswrapper[4886]: E0129 16:23:26.514821 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:23:26 crc kubenswrapper[4886]: E0129 16:23:26.514853 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:23:26 crc kubenswrapper[4886]: E0129 16:23:26.514877 4886 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:23:26 crc kubenswrapper[4886]: E0129 16:23:26.514904 4886 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:23:26 crc kubenswrapper[4886]: E0129 16:23:26.514915 4886 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:23:26 crc 
kubenswrapper[4886]: E0129 16:23:26.514982 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:24:30.514948332 +0000 UTC m=+153.423667644 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:23:26 crc kubenswrapper[4886]: E0129 16:23:26.514986 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:23:26 crc kubenswrapper[4886]: E0129 16:23:26.515018 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:24:30.514997164 +0000 UTC m=+153.423716466 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:23:26 crc kubenswrapper[4886]: E0129 16:23:26.515025 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:23:26 crc kubenswrapper[4886]: E0129 16:23:26.515042 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:24:30.515030205 +0000 UTC m=+153.423749507 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:23:26 crc kubenswrapper[4886]: E0129 16:23:26.515046 4886 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:23:26 crc kubenswrapper[4886]: E0129 16:23:26.515134 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:24:30.515109417 +0000 UTC m=+153.423828729 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.567976 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.568027 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.568037 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.568054 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.568068 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:26Z","lastTransitionTime":"2026-01-29T16:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.614434 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:26 crc kubenswrapper[4886]: E0129 16:23:26.614630 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.614867 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.614917 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:26 crc kubenswrapper[4886]: E0129 16:23:26.615007 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.615182 4886 util.go:30] "No sandbox for pod can be found. 
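[Annotation] The repeated "object ... not registered" failures above typically point at the kubelet's local ConfigMap/Secret cache not having (re)registered those objects for the pod yet, rather than at the objects being absent from the API server. A quick server-side cross-check, sketched with the kubernetes Python client (cluster access and a valid kubeconfig are assumed):

```python
from kubernetes import client, config

# Cross-check that the objects named in the log exist server-side.
# Assumes a reachable cluster and a valid kubeconfig.
config.load_kube_config()
v1 = client.CoreV1Api()
for ns, name in [
    ("openshift-network-diagnostics", "kube-root-ca.crt"),
    ("openshift-network-diagnostics", "openshift-service-ca.crt"),
    ("openshift-network-console", "networking-console-plugin"),
]:
    try:
        v1.read_namespaced_config_map(name, ns)
        print(f"configmap {ns}/{name}: present")
    except client.ApiException as e:
        print(f"configmap {ns}/{name}: HTTP {e.status}")
```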
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:23:26 crc kubenswrapper[4886]: E0129 16:23:26.616740 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:23:26 crc kubenswrapper[4886]: E0129 16:23:26.616815 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.620111 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 16:12:43.767951023 +0000 UTC Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.671390 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.671496 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.671516 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.671577 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.671596 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:26Z","lastTransitionTime":"2026-01-29T16:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.775198 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.775256 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.775272 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.775297 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.775314 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:26Z","lastTransitionTime":"2026-01-29T16:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.878504 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.878546 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.878558 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.878576 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.878587 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:26Z","lastTransitionTime":"2026-01-29T16:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.980267 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.980299 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.980307 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.980320 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:26 crc kubenswrapper[4886]: I0129 16:23:26.980350 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:26Z","lastTransitionTime":"2026-01-29T16:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.082810 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.082867 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.082887 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.082917 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.082940 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:27Z","lastTransitionTime":"2026-01-29T16:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.185649 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.185723 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.185742 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.185766 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.185784 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:27Z","lastTransitionTime":"2026-01-29T16:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.288733 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.288798 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.288815 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.288848 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.288867 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:27Z","lastTransitionTime":"2026-01-29T16:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.391716 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.391766 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.391778 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.391798 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.391811 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:27Z","lastTransitionTime":"2026-01-29T16:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.495190 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.495271 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.495290 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.495316 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.495366 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:27Z","lastTransitionTime":"2026-01-29T16:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.598919 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.598974 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.598984 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.599001 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.599011 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:27Z","lastTransitionTime":"2026-01-29T16:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.621128 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 09:21:23.939802144 +0000 UTC Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.701197 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.701281 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.701307 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.701408 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.701434 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:27Z","lastTransitionTime":"2026-01-29T16:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.803481 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.803524 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.803536 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.803552 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.803561 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:27Z","lastTransitionTime":"2026-01-29T16:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
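[Annotation] Each pass of certificate_manager above prints a freshly computed rotation deadline, and the value jumps between passes (2026-01-09 at 16:23:26, 2025-12-07 at 16:23:27) because the deadline is re-drawn from a jittered band late in the certificate's validity window; since every drawn deadline is already in the past relative to the logged clock, rotation is attempted on each pass. A rough reconstruction, assuming the commonly cited 70-90% jitter band (an assumption about client-go internals, for illustration only):

```python
import random
from datetime import datetime, timedelta

# Hypothetical reconstruction of a jittered rotation deadline: a random
# point in the 70%-90% band of the cert lifetime. The band is assumed.
def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
    lifetime = (not_after - not_before).total_seconds()
    return not_before + timedelta(seconds=lifetime * random.uniform(0.7, 0.9))

not_after = datetime(2026, 2, 24, 5, 53, 3)   # expiry from the log
not_before = not_after - timedelta(days=365)  # assumed issue date
print(rotation_deadline(not_before, not_after))
```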
Has your network provider started?"} Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.906168 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.906229 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.906243 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.906262 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:27 crc kubenswrapper[4886]: I0129 16:23:27.906277 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:27Z","lastTransitionTime":"2026-01-29T16:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.009872 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.009921 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.009936 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.009960 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.009972 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:28Z","lastTransitionTime":"2026-01-29T16:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.113218 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.113280 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.113297 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.113320 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.113359 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:28Z","lastTransitionTime":"2026-01-29T16:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.216426 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.216496 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.216516 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.216544 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.216562 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:28Z","lastTransitionTime":"2026-01-29T16:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.319566 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.319643 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.319668 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.319698 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.319721 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:28Z","lastTransitionTime":"2026-01-29T16:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.423011 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.423082 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.423099 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.423122 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.423141 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:28Z","lastTransitionTime":"2026-01-29T16:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.526054 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.526092 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.526100 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.526113 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.526122 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:28Z","lastTransitionTime":"2026-01-29T16:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.614813 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.614840 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.614880 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.614896 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:28 crc kubenswrapper[4886]: E0129 16:23:28.615030 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:23:28 crc kubenswrapper[4886]: E0129 16:23:28.615227 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:23:28 crc kubenswrapper[4886]: E0129 16:23:28.615446 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
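[Annotation] Every NodeNotReady heartbeat above carries the same root cause string: no CNI configuration file in /etc/kubernetes/cni/net.d/. Until the network provider (here OVN-Kubernetes, delivered via multus) writes a config there, the runtime keeps reporting NetworkReady=false and pod sandboxes cannot be created. A hand check that mirrors the logged condition (a sketch; the kubelet's real check happens through the CRI):

```python
from pathlib import Path

# Mirror the readiness condition quoted in the log: the node stays
# NotReady while this directory holds no CNI config files.
cni_dir = Path("/etc/kubernetes/cni/net.d")
confs = sorted(p.name for p in cni_dir.glob("*.conf*")) if cni_dir.is_dir() else []
if confs:
    print("CNI configs present:", ", ".join(confs))
else:
    print(f"no CNI configuration file in {cni_dir}/ - network plugin not ready")
```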
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:23:28 crc kubenswrapper[4886]: E0129 16:23:28.615616 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.621290 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 16:05:00.371946265 +0000 UTC Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.629094 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.629140 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.629161 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.629184 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.629199 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:28Z","lastTransitionTime":"2026-01-29T16:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.636569 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.649271 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.670187 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fbf425aaf0e257fa72dc096677e8404be047665a998729a21862b66d4162248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"message\\\":\\\"2026-01-29T16:22:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2726de4a-30b3-494a-98bf-84dc414659b9\\\\n2026-01-29T16:22:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2726de4a-30b3-494a-98bf-84dc414659b9 to /host/opt/cni/bin/\\\\n2026-01-29T16:22:35Z [verbose] 
multus-daemon started\\\\n2026-01-29T16:22:35Z [verbose] Readiness Indicator file check\\\\n2026-01-29T16:23:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.686038 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a420fc-ad8c-41c3-82c3-1e23731e1f55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://689b39c75b6ca5561959fd753c3fe27c3ad2584d5efc8ffa1edd4a0b14b91bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef95d9dbe53c4f2428892b94b669bade8eeae51041691998500d0d2be87a40b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tpc4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:28Z is after 2025-08-24T17:21:41Z" Jan 29 
16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.697124 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c7wkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75261312-030c-44eb-8d08-07a35f5bcfcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c7wkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.715850 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2126e0e150d4a578976def8715d596ae31d0561b0eaa832061d4fb86a8a930\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.730900 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce44468-ba95-4390-a37a-88eb25fc5a52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09015f4cf412b00af42b12364de032e35bb3e11014cac2c07375cb3b2c24a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4602a8fe487e855ffe5ee1a385dab13c4a51c6708e80c6ce2dc8de22bf8dc14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4602a8fe487e855ffe5ee1a385dab13c4a51c6708e80c6ce2dc8de22bf8dc14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.732653 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.732705 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.732714 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.732730 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.732762 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:28Z","lastTransitionTime":"2026-01-29T16:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.752271 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.764298 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.777276 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:28Z is after 2025-08-24T17:21:41Z" Jan 29 
16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.791817 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50c05fff-ee54-4ee8-a4f9-93807f7df3db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c5243735574fb8f3b0de74ff95f08f9b3efdf7377f0f56e20b15ef6c859fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841de8a754cdf15452fd36d55173c1017dec05d898f5a51109562c77cbbf76b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92150b6456594fe8576872c07810d1984badff360fdeaa76b4db40179836b5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.803816 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.815304 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca897a9b4e4a2b647e34e013a9d20e83e7576e3f2f4a44d30ce36c4efff1a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.826454 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf912
1d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.834825 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.835046 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.835119 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.835188 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.835264 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:28Z","lastTransitionTime":"2026-01-29T16:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.840556 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.849102 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.876063 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c67158d7957062b5ce4ee6477aeea8e6c142251facc3f1f97cfe2d71126d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337c67158d7957062b5ce4ee6477aeea8e6c142251facc3f1f97cfe2d71126d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:23:08Z\\\",\\\"message\\\":\\\"d openshift-image-registry/node-ca-cjsnw\\\\nI0129 16:23:08.683928 6773 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0129 16:23:08.683933 6773 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-cjsnw in node crc\\\\nI0129 16:23:08.683693 6773 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 16:23:08.683941 6773 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-cjsnw after 0 failed attempt(s)\\\\nI0129 16:23:08.683945 6773 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 16:23:08.683947 6773 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-cjsnw\\\\nI0129 16:23:08.683952 6773 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0129 16:23:08.683957 6773 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0129 16:23:08.683960 6773 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 16:23:08.683771 6773 obj_retry.go:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:23:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bsnwn_openshift-ovn-kubernetes(d46238ab-90d4-41b8-b546-6dbff06cf5ed)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.888409 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.938265 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.938313 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.938358 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.938380 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:28 crc kubenswrapper[4886]: I0129 16:23:28.938395 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:28Z","lastTransitionTime":"2026-01-29T16:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.040689 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.040747 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.040764 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.040788 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.040805 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:29Z","lastTransitionTime":"2026-01-29T16:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.142884 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.143369 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.143589 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.143777 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.143952 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:29Z","lastTransitionTime":"2026-01-29T16:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.247056 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.247110 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.247128 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.247153 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.247171 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:29Z","lastTransitionTime":"2026-01-29T16:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.350117 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.350191 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.350215 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.350245 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.350268 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:29Z","lastTransitionTime":"2026-01-29T16:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.453752 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.453800 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.453816 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.453838 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.453856 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:29Z","lastTransitionTime":"2026-01-29T16:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.556157 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.556207 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.556222 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.556243 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.556258 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:29Z","lastTransitionTime":"2026-01-29T16:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.622195 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 02:03:06.031195046 +0000 UTC Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.658577 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.658651 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.658675 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.658707 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.658726 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:29Z","lastTransitionTime":"2026-01-29T16:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.761097 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.761129 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.761138 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.761152 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.761162 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:29Z","lastTransitionTime":"2026-01-29T16:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.864173 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.864210 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.864220 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.864235 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.864272 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:29Z","lastTransitionTime":"2026-01-29T16:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.967742 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.967799 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.967808 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.967823 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:29 crc kubenswrapper[4886]: I0129 16:23:29.967833 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:29Z","lastTransitionTime":"2026-01-29T16:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.075596 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.075654 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.075671 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.075695 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.075712 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:30Z","lastTransitionTime":"2026-01-29T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.179317 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.179420 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.179440 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.179869 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.179927 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:30Z","lastTransitionTime":"2026-01-29T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.281994 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.282044 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.282054 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.282067 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.282075 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:30Z","lastTransitionTime":"2026-01-29T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.384920 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.384969 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.384984 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.385008 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.385023 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:30Z","lastTransitionTime":"2026-01-29T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.487369 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.487409 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.487417 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.487432 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.487441 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:30Z","lastTransitionTime":"2026-01-29T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.589811 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.589871 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.589888 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.589913 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.589931 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:30Z","lastTransitionTime":"2026-01-29T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.614280 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.614390 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:30 crc kubenswrapper[4886]: E0129 16:23:30.614426 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:23:30 crc kubenswrapper[4886]: E0129 16:23:30.614550 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.614822 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.614920 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:30 crc kubenswrapper[4886]: E0129 16:23:30.615042 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:23:30 crc kubenswrapper[4886]: E0129 16:23:30.615178 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.622641 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 11:03:41.697306356 +0000 UTC Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.692825 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.692887 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.692905 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.692930 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.692948 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:30Z","lastTransitionTime":"2026-01-29T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.702822 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.702889 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.702913 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.702939 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.702957 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:30Z","lastTransitionTime":"2026-01-29T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:30 crc kubenswrapper[4886]: E0129 16:23:30.726054 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.731554 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.731619 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.731641 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.731668 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.731684 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:30Z","lastTransitionTime":"2026-01-29T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:30 crc kubenswrapper[4886]: E0129 16:23:30.752238 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.757987 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.758046 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.758064 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.758088 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.758103 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:30Z","lastTransitionTime":"2026-01-29T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:30 crc kubenswrapper[4886]: E0129 16:23:30.772205 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.776954 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.776998 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.777047 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.777070 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.777085 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:30Z","lastTransitionTime":"2026-01-29T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:30 crc kubenswrapper[4886]: E0129 16:23:30.796513 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.800733 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.800771 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.800782 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.800800 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.800814 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:30Z","lastTransitionTime":"2026-01-29T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:30 crc kubenswrapper[4886]: E0129 16:23:30.825085 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:30 crc kubenswrapper[4886]: E0129 16:23:30.825262 4886 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.826682 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
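Every failed patch above ends with the same root cause: the node.network-node-identity.openshift.io webhook presents a serving certificate that expired on 2025-08-24, while the node clock reads 2026-01-29, so TLS verification fails before the status update ever reaches the API server. Once the kubelet logs "update node status exceeds retry count" it gives up for this sync iteration and repeats the whole sequence on the next one; the cure is rotating the expired certificate (on CRC, typically by recreating the cluster from a current bundle), not adjusting retries. A quick way to confirm what the endpoint is actually serving, assuming it still listens on 127.0.0.1:9743 as in the log:

    # Print subject and validity window of the certificate served on :9743
    # ("echo |" closes stdin so the TLS session ends immediately)
    echo | openssl s_client -connect 127.0.0.1:9743 2>/dev/null \
      | openssl x509 -noout -subject -dates
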
event="NodeHasSufficientMemory" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.826732 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.826743 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.826761 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.826772 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:30Z","lastTransitionTime":"2026-01-29T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.929591 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.929642 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.929655 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.929675 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:30 crc kubenswrapper[4886]: I0129 16:23:30.929688 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:30Z","lastTransitionTime":"2026-01-29T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.031811 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.031844 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.031852 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.031867 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.031876 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:31Z","lastTransitionTime":"2026-01-29T16:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.135236 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.135280 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.135296 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.135321 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.135369 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:31Z","lastTransitionTime":"2026-01-29T16:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.238252 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.238296 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.238314 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.238350 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.238364 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:31Z","lastTransitionTime":"2026-01-29T16:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.340841 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.340882 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.340892 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.340916 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.340929 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:31Z","lastTransitionTime":"2026-01-29T16:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.443778 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.443836 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.443853 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.443879 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.443901 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:31Z","lastTransitionTime":"2026-01-29T16:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.546965 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.547038 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.547047 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.547061 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.547069 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:31Z","lastTransitionTime":"2026-01-29T16:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
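Every Ready=False heartbeat in this stretch carries the same message: there is no CNI configuration in /etc/kubernetes/cni/net.d/, so the container runtime reports NetworkReady=false and the node stays NotReady. Two simple checks from the node itself (for example via "oc debug node/crc" or SSH into the CRC VM):

    # An empty directory here is exactly what the kubelet is complaining about
    ls -l /etc/kubernetes/cni/net.d/

    # The network plugin's own pods can still run while the node is NotReady;
    # on CRC the provider is usually OVN-Kubernetes (an assumption; adjust to your cluster)
    crictl ps --name ovnkube
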
Jan 29 16:23:31 crc kubenswrapper[4886]: I0129 16:23:31.623471 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 09:07:23.051560798 +0000 UTC
[repetitions of the event cycle at 16:23:31.649183 and 16:23:31.752026 omitted]
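The certificate_manager line deserves attention on its own: the kubelet-serving certificate is still valid until 2026-02-24, but its rotation deadline (2025-12-07) already lies in the past, so the kubelet will attempt rotation immediately, and with expired signers elsewhere in the cluster that rotation can itself stall. To inspect the certificate currently on disk (the path below is the standard kubelet PKI location; verify it on your node):

    # Show subject and validity window of the kubelet's current serving certificate
    openssl x509 -noout -subject -dates \
      -in /var/lib/kubelet/pki/kubelet-server-current.pem
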
[repetitions of the event cycle from 16:23:31.855397 through 16:23:32.575177 omitted]
Jan 29 16:23:32 crc kubenswrapper[4886]: I0129 16:23:32.614581 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:23:32 crc kubenswrapper[4886]: I0129 16:23:32.614702 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:23:32 crc kubenswrapper[4886]: I0129 16:23:32.615135 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw"
Jan 29 16:23:32 crc kubenswrapper[4886]: E0129 16:23:32.615072 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
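With the node network still down, the knock-on effect shows up next: pods that need a pod network (the networking-console-plugin, network-check-source, network-metrics-daemon, and network-check-target entries here and just below) cannot get a sandbox, so the kubelet skips their sync instead of crash-looping them. The sandbox state can be checked directly with crictl, using a pod name taken from the log:

    # List sandboxes for one affected pod; while CNI is down, expect no ready sandbox
    crictl pods --name networking-console-plugin
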
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:23:32 crc kubenswrapper[4886]: E0129 16:23:32.615291 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:23:32 crc kubenswrapper[4886]: E0129 16:23:32.615445 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:23:32 crc kubenswrapper[4886]: I0129 16:23:32.615515 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:32 crc kubenswrapper[4886]: E0129 16:23:32.616057 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:23:32 crc kubenswrapper[4886]: I0129 16:23:32.616499 4886 scope.go:117] "RemoveContainer" containerID="337c67158d7957062b5ce4ee6477aeea8e6c142251facc3f1f97cfe2d71126d0" Jan 29 16:23:32 crc kubenswrapper[4886]: I0129 16:23:32.624045 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 03:30:14.998836264 +0000 UTC Jan 29 16:23:32 crc kubenswrapper[4886]: I0129 16:23:32.678764 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:32 crc kubenswrapper[4886]: I0129 16:23:32.678830 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:32 crc kubenswrapper[4886]: I0129 16:23:32.678894 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:32 crc kubenswrapper[4886]: I0129 16:23:32.678927 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:32 crc kubenswrapper[4886]: I0129 16:23:32.678951 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:32Z","lastTransitionTime":"2026-01-29T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:32 crc kubenswrapper[4886]: I0129 16:23:32.781854 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:32 crc kubenswrapper[4886]: I0129 16:23:32.781922 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:32 crc kubenswrapper[4886]: I0129 16:23:32.781945 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:32 crc kubenswrapper[4886]: I0129 16:23:32.781991 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:32 crc kubenswrapper[4886]: I0129 16:23:32.782027 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:32Z","lastTransitionTime":"2026-01-29T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:32 crc kubenswrapper[4886]: I0129 16:23:32.884893 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:32 crc kubenswrapper[4886]: I0129 16:23:32.884941 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:32 crc kubenswrapper[4886]: I0129 16:23:32.884953 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:32 crc kubenswrapper[4886]: I0129 16:23:32.884970 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:32 crc kubenswrapper[4886]: I0129 16:23:32.884983 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:32Z","lastTransitionTime":"2026-01-29T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:32 crc kubenswrapper[4886]: I0129 16:23:32.987567 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:32 crc kubenswrapper[4886]: I0129 16:23:32.987619 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:32 crc kubenswrapper[4886]: I0129 16:23:32.987643 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:32 crc kubenswrapper[4886]: I0129 16:23:32.987663 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:32 crc kubenswrapper[4886]: I0129 16:23:32.987679 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:32Z","lastTransitionTime":"2026-01-29T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.090190 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.090227 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.090238 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.090255 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.090266 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:33Z","lastTransitionTime":"2026-01-29T16:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.192076 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.192128 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.192145 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.192171 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.192187 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:33Z","lastTransitionTime":"2026-01-29T16:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.266677 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bsnwn_d46238ab-90d4-41b8-b546-6dbff06cf5ed/ovnkube-controller/2.log" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.272291 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" event={"ID":"d46238ab-90d4-41b8-b546-6dbff06cf5ed","Type":"ContainerStarted","Data":"a0641acb8929ee41033e4169acb367c2a8a89a440e89fc29dde22190651e439f"} Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.272758 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.282449 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 
16:23:33.294392 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.294444 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.294456 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.294473 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.294485 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:33Z","lastTransitionTime":"2026-01-29T16:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.299754 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.311551 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.323877 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.343839 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0641acb8929ee41033e4169acb367c2a8a89a440e89fc29dde22190651e439f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337c67158d7957062b5ce4ee6477aeea8e6c142251facc3f1f97cfe2d71126d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:23:08Z\\\",\\\"message\\\":\\\"d openshift-image-registry/node-ca-cjsnw\\\\nI0129 16:23:08.683928 6773 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0129 16:23:08.683933 6773 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-cjsnw in node crc\\\\nI0129 16:23:08.683693 6773 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 16:23:08.683941 6773 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-cjsnw after 0 failed attempt(s)\\\\nI0129 16:23:08.683945 6773 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 16:23:08.683947 6773 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-cjsnw\\\\nI0129 16:23:08.683952 6773 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0129 16:23:08.683957 6773 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0129 16:23:08.683960 6773 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 16:23:08.683771 6773 
obj_retry.go:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:23:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:23:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\
\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.356453 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c7wkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75261312-030c-44eb-8d08-07a35f5bcfcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c7wkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.371644 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.385098 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.396614 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.396655 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.396667 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.396685 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.396696 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:33Z","lastTransitionTime":"2026-01-29T16:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.400148 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fbf425aaf0e257fa72dc096677e8404be047665a998729a21862b66d4162248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"message\\\":\\\"2026-01-29T16:22:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2726de4a-30b3-494a-98bf-84dc414659b9\\\\n2026-01-29T16:22:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2726de4a-30b3-494a-98bf-84dc414659b9 to /host/opt/cni/bin/\\\\n2026-01-29T16:22:35Z [verbose] multus-daemon started\\\\n2026-01-29T16:22:35Z [verbose] Readiness Indicator file check\\\\n2026-01-29T16:23:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.411856 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a420fc-ad8c-41c3-82c3-1e23731e1f55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://689b39c75b6ca5561959fd753c3fe27c3ad2584d5efc8ffa1edd4a0b14b91bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef95d9dbe53c4f2428892b94b669bade8eeae51041691998500d0d2be87a40b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tpc4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 29 
16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.426908 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.439846 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.456424 4886 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2126e0e150d4a578976def8715d596ae31d0561b0
eaa832061d4fb86a8a930\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.466213 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce44468-ba95-4390-a37a-88eb25fc5a52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09015f4cf412b00af42b12364de032e35bb3e11014cac2c07375cb3b2c24a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4602a8fe487e855ffe5ee1a385dab13c4a51c6708e80c6ce2dc8de22bf8dc14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4602a8fe487e855ffe5ee1a385dab13c4a51c6708e80c6ce2dc8de22bf8dc14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.478272 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.489785 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50c05fff-ee54-4ee8-a4f9-93807f7df3db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c5243735574fb8f3b0de74ff95f08f9b3efdf7377f0f56e20b15ef6c859fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841de8a754cdf15452fd36d55173c1017dec05d898f5a51109562c77cbbf76b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92150b6456594fe8576872c07810d1984badff360fdeaa76b4db40179836b5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.501031 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.501129 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.501143 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.501171 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.501184 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:33Z","lastTransitionTime":"2026-01-29T16:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.502739 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.516355 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca897a9b4e4a2b647e34e013a9d20e83e7576e3f2f4a44d30ce36c4efff1a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.605060 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.605125 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:33 crc 
kubenswrapper[4886]: I0129 16:23:33.605145 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.605175 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.605196 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:33Z","lastTransitionTime":"2026-01-29T16:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.624435 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 11:05:44.077952628 +0000 UTC Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.707591 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.707657 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.707677 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.707696 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.707711 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:33Z","lastTransitionTime":"2026-01-29T16:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.810078 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.810130 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.810142 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.810159 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.810171 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:33Z","lastTransitionTime":"2026-01-29T16:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.912515 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.912587 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.912610 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.912642 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:33 crc kubenswrapper[4886]: I0129 16:23:33.912665 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:33Z","lastTransitionTime":"2026-01-29T16:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.015192 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.015269 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.015281 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.015301 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.015313 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:34Z","lastTransitionTime":"2026-01-29T16:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.117662 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.117713 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.117730 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.117752 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.117765 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:34Z","lastTransitionTime":"2026-01-29T16:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.220237 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.220296 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.220315 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.220374 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.220395 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:34Z","lastTransitionTime":"2026-01-29T16:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.277809 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bsnwn_d46238ab-90d4-41b8-b546-6dbff06cf5ed/ovnkube-controller/3.log" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.278836 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bsnwn_d46238ab-90d4-41b8-b546-6dbff06cf5ed/ovnkube-controller/2.log" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.282526 4886 generic.go:334] "Generic (PLEG): container finished" podID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerID="a0641acb8929ee41033e4169acb367c2a8a89a440e89fc29dde22190651e439f" exitCode=1 Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.282582 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" event={"ID":"d46238ab-90d4-41b8-b546-6dbff06cf5ed","Type":"ContainerDied","Data":"a0641acb8929ee41033e4169acb367c2a8a89a440e89fc29dde22190651e439f"} Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.282624 4886 scope.go:117] "RemoveContainer" containerID="337c67158d7957062b5ce4ee6477aeea8e6c142251facc3f1f97cfe2d71126d0" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.283456 4886 scope.go:117] "RemoveContainer" containerID="a0641acb8929ee41033e4169acb367c2a8a89a440e89fc29dde22190651e439f" Jan 29 16:23:34 crc kubenswrapper[4886]: E0129 16:23:34.283688 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bsnwn_openshift-ovn-kubernetes(d46238ab-90d4-41b8-b546-6dbff06cf5ed)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.305560 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50c05fff-ee54-4ee8-a4f9-93807f7df3db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c5243735574fb8f3b0de74ff95f08f9b3efdf7377f0f56e20b15ef6c859fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841de8a754cdf15452fd36d55173c1017dec05d898f5a51109562c77cbbf76b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92150b6456594fe8576872c07810d1984badff360fdeaa76b4db40179836b5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.323000 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.323041 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.323053 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.323070 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.323083 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:34Z","lastTransitionTime":"2026-01-29T16:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.327298 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.344557 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca897a9b4e4a2b647e34e013a9d20e83e7576e3f2f4a44d30ce36c4efff1a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.358852 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.372660 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.382789 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.401125 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0641acb8929ee41033e4169acb367c2a8a89a440e89fc29dde22190651e439f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337c67158d7957062b5ce4ee6477aeea8e6c142251facc3f1f97cfe2d71126d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:23:08Z\\\",\\\"message\\\":\\\"d openshift-image-registry/node-ca-cjsnw\\\\nI0129 16:23:08.683928 6773 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0129 16:23:08.683933 6773 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-cjsnw in node crc\\\\nI0129 16:23:08.683693 6773 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 16:23:08.683941 6773 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-cjsnw after 0 failed attempt(s)\\\\nI0129 16:23:08.683945 6773 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 16:23:08.683947 6773 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-cjsnw\\\\nI0129 16:23:08.683952 6773 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0129 16:23:08.683957 6773 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0129 16:23:08.683960 6773 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 16:23:08.683771 6773 
obj_retry.go:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:23:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0641acb8929ee41033e4169acb367c2a8a89a440e89fc29dde22190651e439f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:23:33Z\\\",\\\"message\\\":\\\"g]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}\\\\nI0129 16:23:33.473162 7035 services_controller.go:360] Finished syncing service scheduler on namespace openshift-kube-scheduler for network=default : 2.954335ms\\\\nI0129 16:23:33.473171 7035 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0129 16:23:33.473251 7035 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0129 16:23:33.473286 7035 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0129 16:23:33.473359 7035 factory.go:1336] Added *v1.Node event handler 7\\\\nI0129 16:23:33.473409 7035 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0129 16:23:33.473641 7035 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0129 16:23:33.473731 7035 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0129 16:23:33.473773 7035 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:23:33.473815 7035 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 16:23:33.473888 7035 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:23:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.412556 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"1
92.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.425053 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.425905 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.425983 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.425994 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.426010 4886 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.426020 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:34Z","lastTransitionTime":"2026-01-29T16:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.440461 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.452897 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fbf425aaf0e257fa72dc096677e8404be047665a998729a21862b66d4162248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"message\\\":\\\"2026-01-29T16:22:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2726de4a-30b3-494a-98bf-84dc414659b9\\\\n2026-01-29T16:22:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2726de4a-30b3-494a-98bf-84dc414659b9 to /host/opt/cni/bin/\\\\n2026-01-29T16:22:35Z [verbose] multus-daemon started\\\\n2026-01-29T16:22:35Z [verbose] Readiness Indicator file check\\\\n2026-01-29T16:23:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.464388 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a420fc-ad8c-41c3-82c3-1e23731e1f55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://689b39c75b6ca5561959fd753c3fe27c3ad2584d5efc8ffa1edd4a0b14b91bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef95d9dbe53c4f2428892b94b669bade8eeae51041691998500d0d2be87a40b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tpc4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 29 
16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.475627 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c7wkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75261312-030c-44eb-8d08-07a35f5bcfcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c7wkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.493029 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2126e0e150d4a578976def8715d596ae31d0561b0eaa832061d4fb86a8a930\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.503898 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce44468-ba95-4390-a37a-88eb25fc5a52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09015f4cf412b00af42b12364de032e35bb3e11014cac2c07375cb3b2c24a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4602a8fe487e855ffe5ee1a385dab13c4a51c6708e80c6ce2dc8de22bf8dc14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4602a8fe487e855ffe5ee1a385dab13c4a51c6708e80c6ce2dc8de22bf8dc14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.520895 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.529074 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.529431 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.529623 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.529780 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.529891 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:34Z","lastTransitionTime":"2026-01-29T16:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.535489 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.545708 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.614313 4886 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.614413 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.614313 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:34 crc kubenswrapper[4886]: E0129 16:23:34.614523 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:23:34 crc kubenswrapper[4886]: E0129 16:23:34.614631 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.614674 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:23:34 crc kubenswrapper[4886]: E0129 16:23:34.614908 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:23:34 crc kubenswrapper[4886]: E0129 16:23:34.614996 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.625477 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 23:39:15.363352672 +0000 UTC Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.632305 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.632378 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.632391 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.632407 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.632419 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:34Z","lastTransitionTime":"2026-01-29T16:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.734616 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.734673 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.734693 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.734723 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.734745 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:34Z","lastTransitionTime":"2026-01-29T16:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.836905 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.836975 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.836989 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.837006 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.837017 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:34Z","lastTransitionTime":"2026-01-29T16:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.939708 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.939769 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.939786 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.939811 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:34 crc kubenswrapper[4886]: I0129 16:23:34.939833 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:34Z","lastTransitionTime":"2026-01-29T16:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.042706 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.042753 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.042768 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.042789 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.042802 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:35Z","lastTransitionTime":"2026-01-29T16:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.145382 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.145425 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.145440 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.145460 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.145476 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:35Z","lastTransitionTime":"2026-01-29T16:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.248239 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.248294 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.248313 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.248368 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.248388 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:35Z","lastTransitionTime":"2026-01-29T16:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.288607 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bsnwn_d46238ab-90d4-41b8-b546-6dbff06cf5ed/ovnkube-controller/3.log" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.293594 4886 scope.go:117] "RemoveContainer" containerID="a0641acb8929ee41033e4169acb367c2a8a89a440e89fc29dde22190651e439f" Jan 29 16:23:35 crc kubenswrapper[4886]: E0129 16:23:35.293899 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bsnwn_openshift-ovn-kubernetes(d46238ab-90d4-41b8-b546-6dbff06cf5ed)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.314881 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.334620 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.348959 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fbf425aaf0e257fa72dc096677e8404be047665a998729a21862b66d4162248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"message\\\":\\\"2026-01-29T16:22:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2726de4a-30b3-494a-98bf-84dc414659b9\\\\n2026-01-29T16:22:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2726de4a-30b3-494a-98bf-84dc414659b9 to /host/opt/cni/bin/\\\\n2026-01-29T16:22:35Z [verbose] multus-daemon started\\\\n2026-01-29T16:22:35Z [verbose] Readiness Indicator file check\\\\n2026-01-29T16:23:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.350608 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.350646 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.350655 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.350671 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.350680 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:35Z","lastTransitionTime":"2026-01-29T16:23:35Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.359942 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a420fc-ad8c-41c3-82c3-1e23731e1f55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://689b39c75b6ca5561959fd753c3fe27c3ad2584d5efc8ffa1edd4a0b14b91bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef95d9dbe53c4f2428892b94b669bade8eeae51041691998500d0d2be87a40b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-
01-29T16:22:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tpc4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.370125 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c7wkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75261312-030c-44eb-8d08-07a35f5bcfcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c7wkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.382180 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2126e0e150d4a578976def8715d596ae31d0561b0eaa832061d4fb86a8a930\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.391640 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce44468-ba95-4390-a37a-88eb25fc5a52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09015f4cf412b00af42b12364de032e35bb3e11014cac2c07375cb3b2c24a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4602a8fe487e855ffe5ee1a385dab13c4a51c6708e80c6ce2dc8de22bf8dc14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4602a8fe487e855ffe5ee1a385dab13c4a51c6708e80c6ce2dc8de22bf8dc14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.405234 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.415482 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.426074 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.437523 4886 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50c05fff-ee54-4ee8-a4f9-93807f7df3db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c5243735574fb8f3b0de74ff95f08f9b3efdf7377f0f56e20b15ef6c859fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841de8a754cdf15452fd36d55173c1017dec05d898f5a51109562c77cbbf76b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92150b6456594fe8576872c07810d1984badff360fdeaa76b4db40179836b5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.452381 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.452864 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.452942 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.452953 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.452977 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.452994 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:35Z","lastTransitionTime":"2026-01-29T16:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.468108 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca897a9b4e4a2b647e34e013a9d20e83e7576e3f2f4a44d30ce36c4efff1a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.483843 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480
fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.496782 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.505632 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.521275 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerI
D\\\":\\\"cri-o://b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0641acb8929ee41033e4169acb367c2a8a89a440e89fc29dde22190651e439f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0641acb8929ee41033e4169acb367c2a8a89a440e89fc29dde22190651e439f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:23:33Z\\\",\\\"message\\\":\\\"g]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}\\\\nI0129 16:23:33.473162 7035 services_controller.go:360] Finished syncing service scheduler on namespace openshift-kube-scheduler for network=default : 2.954335ms\\\\nI0129 16:23:33.473171 7035 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0129 16:23:33.473251 7035 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0129 16:23:33.473286 7035 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0129 16:23:33.473359 7035 factory.go:1336] Added *v1.Node event handler 7\\\\nI0129 16:23:33.473409 7035 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0129 16:23:33.473641 7035 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0129 16:23:33.473731 7035 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0129 16:23:33.473773 7035 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:23:33.473815 7035 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 16:23:33.473888 7035 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:23:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bsnwn_openshift-ovn-kubernetes(d46238ab-90d4-41b8-b546-6dbff06cf5ed)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.534262 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.555586 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.555622 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.555636 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.555656 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.555667 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:35Z","lastTransitionTime":"2026-01-29T16:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.626072 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 06:56:06.19001722 +0000 UTC Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.659025 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.659085 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.659096 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.659116 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.659129 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:35Z","lastTransitionTime":"2026-01-29T16:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.762288 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.762380 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.762400 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.762424 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.762441 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:35Z","lastTransitionTime":"2026-01-29T16:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.865502 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.865559 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.865579 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.865605 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.865624 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:35Z","lastTransitionTime":"2026-01-29T16:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.968875 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.968944 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.968969 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.969000 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:35 crc kubenswrapper[4886]: I0129 16:23:35.969026 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:35Z","lastTransitionTime":"2026-01-29T16:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.072217 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.072297 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.072319 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.072380 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.072403 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:36Z","lastTransitionTime":"2026-01-29T16:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.176150 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.176232 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.176251 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.176283 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.176304 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:36Z","lastTransitionTime":"2026-01-29T16:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.279498 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.279576 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.279596 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.279698 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.279716 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:36Z","lastTransitionTime":"2026-01-29T16:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.382997 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.383048 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.383064 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.383087 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.383104 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:36Z","lastTransitionTime":"2026-01-29T16:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.487197 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.487281 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.487304 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.487370 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.487395 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:36Z","lastTransitionTime":"2026-01-29T16:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.590116 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.590178 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.590196 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.590222 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.590242 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:36Z","lastTransitionTime":"2026-01-29T16:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.614586 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.614639 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.614670 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.614609 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:36 crc kubenswrapper[4886]: E0129 16:23:36.614795 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:23:36 crc kubenswrapper[4886]: E0129 16:23:36.614884 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:23:36 crc kubenswrapper[4886]: E0129 16:23:36.614973 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:23:36 crc kubenswrapper[4886]: E0129 16:23:36.615086 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.626835 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 12:11:51.456228683 +0000 UTC Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.692878 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.692916 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.692928 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.692948 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.692961 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:36Z","lastTransitionTime":"2026-01-29T16:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.795217 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.795275 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.795285 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.795306 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.795318 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:36Z","lastTransitionTime":"2026-01-29T16:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.897534 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.897601 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.897626 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.897656 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:36 crc kubenswrapper[4886]: I0129 16:23:36.897679 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:36Z","lastTransitionTime":"2026-01-29T16:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.000804 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.000846 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.000856 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.000871 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.000882 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:37Z","lastTransitionTime":"2026-01-29T16:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.102988 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.103050 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.103068 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.103095 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.103113 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:37Z","lastTransitionTime":"2026-01-29T16:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.205918 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.205966 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.205977 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.205994 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.206005 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:37Z","lastTransitionTime":"2026-01-29T16:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.308761 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.308826 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.308848 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.308877 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.308901 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:37Z","lastTransitionTime":"2026-01-29T16:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.411699 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.411790 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.411807 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.411831 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.411849 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:37Z","lastTransitionTime":"2026-01-29T16:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.514789 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.514834 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.514846 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.514863 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.514876 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:37Z","lastTransitionTime":"2026-01-29T16:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.616913 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.616959 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.616990 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.617005 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.617016 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:37Z","lastTransitionTime":"2026-01-29T16:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.627479 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 20:41:56.29298922 +0000 UTC Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.719764 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.719810 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.719821 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.719878 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.719892 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:37Z","lastTransitionTime":"2026-01-29T16:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.821779 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.821848 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.821873 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.821901 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.821923 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:37Z","lastTransitionTime":"2026-01-29T16:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.925450 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.925516 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.925534 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.925557 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:37 crc kubenswrapper[4886]: I0129 16:23:37.925577 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:37Z","lastTransitionTime":"2026-01-29T16:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.029091 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.029165 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.029182 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.029209 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.029226 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:38Z","lastTransitionTime":"2026-01-29T16:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.132716 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.132828 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.132844 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.132868 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.132882 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:38Z","lastTransitionTime":"2026-01-29T16:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.234898 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.234956 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.234966 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.234981 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.235000 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:38Z","lastTransitionTime":"2026-01-29T16:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.337263 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.337296 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.337309 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.337340 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.337352 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:38Z","lastTransitionTime":"2026-01-29T16:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.440196 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.440286 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.440308 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.440386 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.440409 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:38Z","lastTransitionTime":"2026-01-29T16:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.543959 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.544026 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.544056 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.544086 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.544107 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:38Z","lastTransitionTime":"2026-01-29T16:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.614376 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:23:38 crc kubenswrapper[4886]: E0129 16:23:38.614633 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.614669 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.614737 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:38 crc kubenswrapper[4886]: E0129 16:23:38.614845 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.614869 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:38 crc kubenswrapper[4886]: E0129 16:23:38.614931 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:23:38 crc kubenswrapper[4886]: E0129 16:23:38.615055 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.628405 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 21:15:44.026130657 +0000 UTC Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.632545 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f85c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae17b497-19c0-4f59-93e1-279069e2710a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca897a9b4e4a2b647e34e013a9d20e83e7576e3f2f4a44d30ce36c4efff1a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c4202ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be4f91cb055bd24c420
2ea32e3fa6d36ce5df86a0bdcbbce7059535745a27972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ea62d95d62ba68168a0d4509cc25b24778ca1b0645d4c419d85fbcdb808d4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0626fdf88d6c187e241bb2e50b3d0685699ccb9e589fbf69201a0f1340a6b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7184bcf9aa5ed437a218dc0d04aed760ef345dd29178784b547cab00696af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://725b640d6fc679a055db4d31318a2a898b7df3e7fe08f3b351fbcd7355b13e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4485f27a64246d81f03a6e6c3d8e8998fba1fc096b6600f3ade948be95c5115\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-jqnqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f85c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.646148 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50c05fff-ee54-4ee8-a4f9-93807f7df3db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c5243735574fb8f3b0de74ff95f08f9b3efdf7377f0f56e20b15ef6c859fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841de8a754cdf15452fd36d55173c1017dec05d898f5a51109562c77cbbf76b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92150b6456594fe8576872c07810d1984badff360fdeaa76b4db40179836b5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881
c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d0c8c6a678baab6ab138b84d629954767ce24848de3c501570d40aa2d06692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.646648 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.646669 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.646677 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.646690 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.646719 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:38Z","lastTransitionTime":"2026-01-29T16:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.661626 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.672847 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtrvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bb307e5-0827-4602-95ff-18dec456002b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b75b7c4cc8e7d57133aa4f39dc0702b0b278b66feb5666e1df85fcca88941af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr6xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtrvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.691997 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d46238ab-90d4-41b8-b546-6dbff06cf5ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0641acb8929ee41033e4169acb367c2a8a89a440e89fc29dde22190651e439f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0641acb8929ee41033e4169acb367c2a8a89a440e89fc29dde22190651e439f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:23:33Z\\\",\\\"message\\\":\\\"g]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}\\\\nI0129 16:23:33.473162 7035 services_controller.go:360] Finished syncing service scheduler on namespace openshift-kube-scheduler for network=default : 2.954335ms\\\\nI0129 16:23:33.473171 7035 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0129 16:23:33.473251 7035 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0129 16:23:33.473286 7035 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0129 16:23:33.473359 7035 factory.go:1336] Added *v1.Node event handler 7\\\\nI0129 16:23:33.473409 7035 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0129 16:23:33.473641 7035 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0129 16:23:33.473731 7035 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0129 16:23:33.473773 7035 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:23:33.473815 7035 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 16:23:33.473888 7035 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:23:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bsnwn_openshift-ovn-kubernetes(d46238ab-90d4-41b8-b546-6dbff06cf5ed)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8f8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bsnwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.703183 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cjsnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a68a4f-64a7-404e-8f15-1c299e5a4e2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://567f55915d51ae2a6e05bd48b2aaeda69a4862a6961d29768a1ab1bbba9d9b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8xxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cjsnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.720127 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf76816f-86f4-463c-881c-c71bb86df034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb04af3b89786a19c87112b3f0b339d7d3a7f46761cdce65c6c925d6cc8d9847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b864ccb1e9d12c92fba8aedf9121d4a8c78e256ede74db599226abc4c9936e30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-
manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c768eda598680be7cbf0a43541e911350d44d78647b36c2480fb3ff5e8b727ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.731571 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d171287b9905d8d039ded73015db050a2d3c9d73f2a184aa0b80636cbd4ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b09b3b70a7ff4290ca89a1d046e44a1c99bf69c01d9e6acc30841e08d05daa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.745110 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dstj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b415d17e-f329-40e7-8a3f-32881cb5347a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fbf425aaf0e257fa72dc096677e8404be047665a998729a21862b66d4162248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:23:20Z\\\",\\\"message\\\":\\\"2026-01-29T16:22:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2726de4a-30b3-494a-98bf-84dc414659b9\\\\n2026-01-29T16:22:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2726de4a-30b3-494a-98bf-84dc414659b9 to /host/opt/cni/bin/\\\\n2026-01-29T16:22:35Z [verbose] multus-daemon started\\\\n2026-01-29T16:22:35Z [verbose] Readiness Indicator file check\\\\n2026-01-29T16:23:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxtfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dstj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.748557 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.748585 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.748597 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.748614 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.748626 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:38Z","lastTransitionTime":"2026-01-29T16:23:38Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.761010 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a420fc-ad8c-41c3-82c3-1e23731e1f55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://689b39c75b6ca5561959fd753c3fe27c3ad2584d5efc8ffa1edd4a0b14b91bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef95d9dbe53c4f2428892b94b669bade8eeae51041691998500d0d2be87a40b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94n7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-
01-29T16:22:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tpc4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.775245 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c7wkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75261312-030c-44eb-8d08-07a35f5bcfcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-psdcc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c7wkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.788507 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://392b185e1eef132d64bedf209c2b220a2850c8adb656112caea00273c100f660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.802091 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.817121 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.830607 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6accdb689ed2219fa5779e0b800f4fcc3d03d1ca2f59145e403dcb8fde4394c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.844412 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9db719a3e66a3f40eab4c930306e885139b8c7a354c75a07e1b8d6f2c35f8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h44ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:22:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gx4vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:38Z is after 2025-08-24T17:21:41Z" Jan 29 
16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.853464 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.853577 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.853593 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.853647 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.853663 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:38Z","lastTransitionTime":"2026-01-29T16:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.861892 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9630c976-1bbd-4f14-b4c7-fc0436ca3705\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d2126e0e150d4a578976def8715d596ae31d0561b0eaa832061d4fb86a8a930\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:22:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:22:22.467747 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:22:22.468034 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:22:22.469377 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1302554679/tls.crt::/tmp/serving-cert-1302554679/tls.key\\\\\\\"\\\\nI0129 16:22:22.929859 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:22:22.933935 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:22:22.933958 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:22:22.933982 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:22:22.933987 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:22:22.978348 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:22:22.978383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:22:22.978396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:22:22.978401 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:22:22.978405 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:22:22.978409 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:22:22.978402 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:22:23.014643 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.875269 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ce44468-ba95-4390-a37a-88eb25fc5a52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:21:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09015f4cf412b00af42b12364de032e35bb3e11014cac2c07375cb3b2c24a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4602a8fe487e855ffe5ee1a385dab13c4a51c6708e80c6ce2dc8de22bf8dc14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4602a8fe487e855ffe5ee1a385dab13c4a51c6708e80c6ce2dc8de22bf8dc14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:22:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:22:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:21:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.955972 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.956031 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.956054 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.956084 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:38 crc kubenswrapper[4886]: I0129 16:23:38.956105 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:38Z","lastTransitionTime":"2026-01-29T16:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.058701 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.058760 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.058783 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.058810 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.058831 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:39Z","lastTransitionTime":"2026-01-29T16:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.161117 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.161203 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.161228 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.161264 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.161304 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:39Z","lastTransitionTime":"2026-01-29T16:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.264685 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.264740 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.264756 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.264779 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.264797 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:39Z","lastTransitionTime":"2026-01-29T16:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.368052 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.368135 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.368150 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.368178 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.368198 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:39Z","lastTransitionTime":"2026-01-29T16:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.470233 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.470271 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.470280 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.470295 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.470306 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:39Z","lastTransitionTime":"2026-01-29T16:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.573698 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.573788 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.573813 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.573840 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.573856 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:39Z","lastTransitionTime":"2026-01-29T16:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.629406 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 23:13:03.532502679 +0000 UTC Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.677059 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.677139 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.677175 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.677205 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.677227 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:39Z","lastTransitionTime":"2026-01-29T16:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.780030 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.780090 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.780108 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.780134 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.780152 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:39Z","lastTransitionTime":"2026-01-29T16:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.883539 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.883606 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.883627 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.883661 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.883682 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:39Z","lastTransitionTime":"2026-01-29T16:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.987162 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.987268 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.987292 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.987322 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:39 crc kubenswrapper[4886]: I0129 16:23:39.987375 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:39Z","lastTransitionTime":"2026-01-29T16:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.090978 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.091019 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.091030 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.091045 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.091057 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:40Z","lastTransitionTime":"2026-01-29T16:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.192962 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.193032 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.193050 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.193075 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.193091 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:40Z","lastTransitionTime":"2026-01-29T16:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.295392 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.295434 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.295445 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.295461 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.295472 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:40Z","lastTransitionTime":"2026-01-29T16:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.398456 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.398507 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.398516 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.398534 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.398545 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:40Z","lastTransitionTime":"2026-01-29T16:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.501754 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.501790 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.501822 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.501841 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.501852 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:40Z","lastTransitionTime":"2026-01-29T16:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.604248 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.604313 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.604359 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.604386 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.604405 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:40Z","lastTransitionTime":"2026-01-29T16:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.614667 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.614719 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.614767 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.615165 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:40 crc kubenswrapper[4886]: E0129 16:23:40.615393 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:23:40 crc kubenswrapper[4886]: E0129 16:23:40.615692 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:23:40 crc kubenswrapper[4886]: E0129 16:23:40.615873 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:23:40 crc kubenswrapper[4886]: E0129 16:23:40.616035 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.630107 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 23:29:04.94035788 +0000 UTC Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.707600 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.707651 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.707670 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.707694 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.707711 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:40Z","lastTransitionTime":"2026-01-29T16:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.811054 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.811125 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.811142 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.811182 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.811198 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:40Z","lastTransitionTime":"2026-01-29T16:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.913657 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.913700 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.913713 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.913736 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.913750 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:40Z","lastTransitionTime":"2026-01-29T16:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.973964 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.974029 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.974045 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.974071 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:40 crc kubenswrapper[4886]: I0129 16:23:40.974093 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:40Z","lastTransitionTime":"2026-01-29T16:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:40 crc kubenswrapper[4886]: E0129 16:23:40.997191 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.002056 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.002141 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.002164 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.002195 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.002220 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:41Z","lastTransitionTime":"2026-01-29T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:41 crc kubenswrapper[4886]: E0129 16:23:41.017687 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.021680 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.021752 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.021776 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.021804 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.021824 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:41Z","lastTransitionTime":"2026-01-29T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:41 crc kubenswrapper[4886]: E0129 16:23:41.044047 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.049068 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.049101 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.049110 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.049124 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.049132 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:41Z","lastTransitionTime":"2026-01-29T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:41 crc kubenswrapper[4886]: E0129 16:23:41.065044 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.070030 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.070075 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.070087 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.070107 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.070120 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:41Z","lastTransitionTime":"2026-01-29T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:41 crc kubenswrapper[4886]: E0129 16:23:41.088290 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:23:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd8b5dfd-41ae-412b-b205-175b6140aee3\\\",\\\"systemUUID\\\":\\\"f9e02871-746f-4d5e-9d80-7fb23e871a7f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:23:41 crc kubenswrapper[4886]: E0129 16:23:41.088464 4886 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.090011 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
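The terminal entry above, "update node status exceeds retry count", is the kubelet giving up after exhausting its per-sync retry budget; each preceding "Error updating node status, will retry" was one attempt at the same patch. A simplified sketch of that control flow follows, assuming the upstream kubelet default of five attempts (nodeStatusUpdateRetry); patchNodeStatus is a stand-in that always fails the way the webhook call fails in this log.

// retry_sketch.go: simplified control flow behind the retry/give-up
// messages. Not the kubelet's actual code, just the same shape.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // upstream kubelet default

func patchNodeStatus() error {
	// Stand-in for the PATCH against the API server, which the
	// validating webhook rejects because its certificate expired.
	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
}

func tryUpdateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patchNodeStatus(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return fmt.Errorf("update node status exceeds retry count")
}

func main() {
	if err := tryUpdateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}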
event="NodeHasSufficientMemory" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.090048 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.090061 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.090078 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.090091 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:41Z","lastTransitionTime":"2026-01-29T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.193139 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.193192 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.193207 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.193228 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.193241 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:41Z","lastTransitionTime":"2026-01-29T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.295684 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.295740 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.295757 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.295782 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.295799 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:41Z","lastTransitionTime":"2026-01-29T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.399295 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.399358 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.399370 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.399387 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.399400 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:41Z","lastTransitionTime":"2026-01-29T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.502452 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.502558 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.502580 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.502608 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.502633 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:41Z","lastTransitionTime":"2026-01-29T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.606107 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.606209 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.606232 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.606261 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.606283 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:41Z","lastTransitionTime":"2026-01-29T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.630913 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 00:27:27.712161025 +0000 UTC Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.713410 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.713447 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.713456 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.713471 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.713484 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:41Z","lastTransitionTime":"2026-01-29T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.816400 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.816447 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.816457 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.816475 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.816487 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:41Z","lastTransitionTime":"2026-01-29T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.919001 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.919069 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.919092 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.919121 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:41 crc kubenswrapper[4886]: I0129 16:23:41.919144 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:41Z","lastTransitionTime":"2026-01-29T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.022658 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.022732 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.022752 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.022775 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.022795 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:42Z","lastTransitionTime":"2026-01-29T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.125580 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.125674 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.125691 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.126388 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.126458 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:42Z","lastTransitionTime":"2026-01-29T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.229809 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.229847 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.229858 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.229875 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.229887 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:42Z","lastTransitionTime":"2026-01-29T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.333129 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.333180 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.333194 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.333215 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.333230 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:42Z","lastTransitionTime":"2026-01-29T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.435773 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.435839 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.435863 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.435894 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.435917 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:42Z","lastTransitionTime":"2026-01-29T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.539304 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.539392 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.539411 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.539435 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.539452 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:42Z","lastTransitionTime":"2026-01-29T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.614449 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:23:42 crc kubenswrapper[4886]: E0129 16:23:42.614645 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.614776 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.614886 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.614975 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:42 crc kubenswrapper[4886]: E0129 16:23:42.615122 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:23:42 crc kubenswrapper[4886]: E0129 16:23:42.615239 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:23:42 crc kubenswrapper[4886]: E0129 16:23:42.615492 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.631378 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 07:26:33.432746096 +0000 UTC Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.642571 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.642636 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.642653 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.642678 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.642696 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:42Z","lastTransitionTime":"2026-01-29T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.746155 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.746226 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.746246 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.746274 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.746296 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:42Z","lastTransitionTime":"2026-01-29T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.850286 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.850359 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.850384 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.850407 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.850420 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:42Z","lastTransitionTime":"2026-01-29T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.954164 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.954257 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.954275 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.954300 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:42 crc kubenswrapper[4886]: I0129 16:23:42.954317 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:42Z","lastTransitionTime":"2026-01-29T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.057948 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.058014 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.058032 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.058058 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.058076 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:43Z","lastTransitionTime":"2026-01-29T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.160509 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.160569 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.160591 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.160620 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.160644 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:43Z","lastTransitionTime":"2026-01-29T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.264191 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.264303 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.264353 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.264381 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.264402 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:43Z","lastTransitionTime":"2026-01-29T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.366982 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.367037 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.367054 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.367080 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.367099 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:43Z","lastTransitionTime":"2026-01-29T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.470539 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.470596 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.470614 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.470637 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.470654 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:43Z","lastTransitionTime":"2026-01-29T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.574319 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.574458 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.574480 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.574505 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.574522 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:43Z","lastTransitionTime":"2026-01-29T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.631969 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 17:51:35.343927522 +0000 UTC Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.677151 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.677211 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.677228 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.677254 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.677271 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:43Z","lastTransitionTime":"2026-01-29T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.780210 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.780654 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.780678 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.780713 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.780735 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:43Z","lastTransitionTime":"2026-01-29T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.884450 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.884503 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.884518 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.884541 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.884558 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:43Z","lastTransitionTime":"2026-01-29T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.987302 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.987390 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.987408 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.987433 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:43 crc kubenswrapper[4886]: I0129 16:23:43.987453 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:43Z","lastTransitionTime":"2026-01-29T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.090435 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.090500 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.090517 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.090543 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.090558 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:44Z","lastTransitionTime":"2026-01-29T16:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.193727 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.193794 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.193814 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.193839 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.193856 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:44Z","lastTransitionTime":"2026-01-29T16:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.297074 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.297188 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.297262 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.297288 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.297305 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:44Z","lastTransitionTime":"2026-01-29T16:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.400492 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.400599 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.400633 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.400668 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.400692 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:44Z","lastTransitionTime":"2026-01-29T16:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.504016 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.504080 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.504104 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.504136 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.504159 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:44Z","lastTransitionTime":"2026-01-29T16:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.614255 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.614676 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:44 crc kubenswrapper[4886]: E0129 16:23:44.614852 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.614883 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.614925 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:44 crc kubenswrapper[4886]: E0129 16:23:44.614958 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:23:44 crc kubenswrapper[4886]: E0129 16:23:44.615006 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:23:44 crc kubenswrapper[4886]: E0129 16:23:44.615158 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.615644 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.615703 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.615723 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.615748 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.615768 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:44Z","lastTransitionTime":"2026-01-29T16:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.632691 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 15:02:33.709064902 +0000 UTC Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.718875 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.718960 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.718988 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.719019 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.719039 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:44Z","lastTransitionTime":"2026-01-29T16:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.822746 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.822817 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.822838 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.822868 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.822888 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:44Z","lastTransitionTime":"2026-01-29T16:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.926414 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.926497 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.926521 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.926556 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:44 crc kubenswrapper[4886]: I0129 16:23:44.926579 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:44Z","lastTransitionTime":"2026-01-29T16:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.028968 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.029029 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.029044 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.029073 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.029093 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:45Z","lastTransitionTime":"2026-01-29T16:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.132415 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.132535 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.132565 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.132598 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.132627 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:45Z","lastTransitionTime":"2026-01-29T16:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.235544 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.235607 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.235620 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.235640 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.235656 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:45Z","lastTransitionTime":"2026-01-29T16:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.339562 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.339633 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.339651 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.339679 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.339698 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:45Z","lastTransitionTime":"2026-01-29T16:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.442496 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.442587 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.442615 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.442655 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.442679 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:45Z","lastTransitionTime":"2026-01-29T16:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.546107 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.546155 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.546173 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.546199 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.546217 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:45Z","lastTransitionTime":"2026-01-29T16:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.633117 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 19:39:21.415193724 +0000 UTC Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.648930 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.649009 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.649033 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.649072 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.649098 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:45Z","lastTransitionTime":"2026-01-29T16:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.752281 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.752434 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.752467 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.752498 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.752520 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:45Z","lastTransitionTime":"2026-01-29T16:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.855756 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.855822 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.855843 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.855870 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.855892 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:45Z","lastTransitionTime":"2026-01-29T16:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.959052 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.959095 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.959110 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.959133 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:45 crc kubenswrapper[4886]: I0129 16:23:45.959149 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:45Z","lastTransitionTime":"2026-01-29T16:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.061971 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.062003 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.062014 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.062029 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.062037 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:46Z","lastTransitionTime":"2026-01-29T16:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.165151 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.165212 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.165229 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.165253 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.165270 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:46Z","lastTransitionTime":"2026-01-29T16:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.267461 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.267528 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.267550 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.267581 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.267606 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:46Z","lastTransitionTime":"2026-01-29T16:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.370303 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.370395 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.370419 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.370442 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.370460 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:46Z","lastTransitionTime":"2026-01-29T16:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.474170 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.474240 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.474263 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.474294 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.474315 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:46Z","lastTransitionTime":"2026-01-29T16:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.576615 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.576685 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.576703 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.576729 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.576747 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:46Z","lastTransitionTime":"2026-01-29T16:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.614218 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:46 crc kubenswrapper[4886]: E0129 16:23:46.614436 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.614532 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.614586 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.614563 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:23:46 crc kubenswrapper[4886]: E0129 16:23:46.614805 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:23:46 crc kubenswrapper[4886]: E0129 16:23:46.614869 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:23:46 crc kubenswrapper[4886]: E0129 16:23:46.615490 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.634155 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 16:48:45.922491255 +0000 UTC Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.635888 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.680021 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.680096 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.680113 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.680594 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.680670 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:46Z","lastTransitionTime":"2026-01-29T16:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.784209 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.784272 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.784290 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.784318 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.784370 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:46Z","lastTransitionTime":"2026-01-29T16:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.888210 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.888300 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.888319 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.888374 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.888396 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:46Z","lastTransitionTime":"2026-01-29T16:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.991466 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.991538 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.991558 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.991585 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:46 crc kubenswrapper[4886]: I0129 16:23:46.991602 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:46Z","lastTransitionTime":"2026-01-29T16:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.060930 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/75261312-030c-44eb-8d08-07a35f5bcfcc-metrics-certs\") pod \"network-metrics-daemon-c7wkw\" (UID: \"75261312-030c-44eb-8d08-07a35f5bcfcc\") " pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:47 crc kubenswrapper[4886]: E0129 16:23:47.061179 4886 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:23:47 crc kubenswrapper[4886]: E0129 16:23:47.061323 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75261312-030c-44eb-8d08-07a35f5bcfcc-metrics-certs podName:75261312-030c-44eb-8d08-07a35f5bcfcc nodeName:}" failed. No retries permitted until 2026-01-29 16:24:51.061289592 +0000 UTC m=+173.970008904 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/75261312-030c-44eb-8d08-07a35f5bcfcc-metrics-certs") pod "network-metrics-daemon-c7wkw" (UID: "75261312-030c-44eb-8d08-07a35f5bcfcc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.094196 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.094266 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.094290 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.094321 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.094398 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:47Z","lastTransitionTime":"2026-01-29T16:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.197898 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.197984 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.198005 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.198032 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.198050 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:47Z","lastTransitionTime":"2026-01-29T16:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.302223 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.302315 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.302385 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.302419 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.302441 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:47Z","lastTransitionTime":"2026-01-29T16:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.406056 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.406123 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.406161 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.406191 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.406234 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:47Z","lastTransitionTime":"2026-01-29T16:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.508952 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.509010 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.509033 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.509061 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.509085 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:47Z","lastTransitionTime":"2026-01-29T16:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.611443 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.611569 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.611603 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.611636 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.611658 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:47Z","lastTransitionTime":"2026-01-29T16:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.635199 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 22:31:30.256009653 +0000 UTC Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.714200 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.714248 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.714259 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.714277 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.714290 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:47Z","lastTransitionTime":"2026-01-29T16:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.817383 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.817437 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.817454 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.817477 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.817493 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:47Z","lastTransitionTime":"2026-01-29T16:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.920528 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.920586 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.920605 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.920634 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:47 crc kubenswrapper[4886]: I0129 16:23:47.920657 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:47Z","lastTransitionTime":"2026-01-29T16:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.023840 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.023898 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.023916 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.023943 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.023964 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:48Z","lastTransitionTime":"2026-01-29T16:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.127120 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.127173 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.127191 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.127261 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.127281 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:48Z","lastTransitionTime":"2026-01-29T16:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.230421 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.230530 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.230551 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.230579 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.230598 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:48Z","lastTransitionTime":"2026-01-29T16:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.333298 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.333422 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.333453 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.333498 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.333530 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:48Z","lastTransitionTime":"2026-01-29T16:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.436284 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.436345 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.436357 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.436374 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.436386 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:48Z","lastTransitionTime":"2026-01-29T16:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.539522 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.539596 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.539618 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.539643 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.539661 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:48Z","lastTransitionTime":"2026-01-29T16:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.614051 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.614291 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:48 crc kubenswrapper[4886]: E0129 16:23:48.614425 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.614551 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.614108 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:48 crc kubenswrapper[4886]: E0129 16:23:48.614713 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:23:48 crc kubenswrapper[4886]: E0129 16:23:48.614871 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:23:48 crc kubenswrapper[4886]: E0129 16:23:48.615012 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.636308 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 15:14:45.438721244 +0000 UTC Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.642426 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.642483 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.642500 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.642525 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.642543 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:48Z","lastTransitionTime":"2026-01-29T16:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.696219 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-dtrvj" podStartSLOduration=80.696189037 podStartE2EDuration="1m20.696189037s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:23:48.665407208 +0000 UTC m=+111.574126510" watchObservedRunningTime="2026-01-29 16:23:48.696189037 +0000 UTC m=+111.604908349" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.731808 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-cjsnw" podStartSLOduration=80.731788356 podStartE2EDuration="1m20.731788356s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:23:48.712712965 +0000 UTC m=+111.621432287" watchObservedRunningTime="2026-01-29 16:23:48.731788356 +0000 UTC m=+111.640507638" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.746828 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.746943 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.746969 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.747000 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.747021 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:48Z","lastTransitionTime":"2026-01-29T16:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.748707 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=80.748668094 podStartE2EDuration="1m20.748668094s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:23:48.732090335 +0000 UTC m=+111.640809647" watchObservedRunningTime="2026-01-29 16:23:48.748668094 +0000 UTC m=+111.657387406" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.787059 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-4dstj" podStartSLOduration=80.787035272 podStartE2EDuration="1m20.787035272s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:23:48.7661927 +0000 UTC m=+111.674912002" watchObservedRunningTime="2026-01-29 16:23:48.787035272 +0000 UTC m=+111.695754554" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.805742 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tpc4f" podStartSLOduration=79.805712762 podStartE2EDuration="1m19.805712762s" podCreationTimestamp="2026-01-29 16:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:23:48.78765865 +0000 UTC m=+111.696377972" watchObservedRunningTime="2026-01-29 16:23:48.805712762 +0000 UTC m=+111.714432074" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.849382 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.849421 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.849433 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.849452 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.849464 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:48Z","lastTransitionTime":"2026-01-29T16:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.855713 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=23.855691606 podStartE2EDuration="23.855691606s" podCreationTimestamp="2026-01-29 16:23:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:23:48.829429537 +0000 UTC m=+111.738148809" watchObservedRunningTime="2026-01-29 16:23:48.855691606 +0000 UTC m=+111.764410888" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.856133 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=2.856126938 podStartE2EDuration="2.856126938s" podCreationTimestamp="2026-01-29 16:23:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:23:48.85376598 +0000 UTC m=+111.762485312" watchObservedRunningTime="2026-01-29 16:23:48.856126938 +0000 UTC m=+111.764846220" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.892926 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podStartSLOduration=80.892911811 podStartE2EDuration="1m20.892911811s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:23:48.892392946 +0000 UTC m=+111.801112238" watchObservedRunningTime="2026-01-29 16:23:48.892911811 +0000 UTC m=+111.801631093" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.918389 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=84.918319595 podStartE2EDuration="1m24.918319595s" podCreationTimestamp="2026-01-29 16:22:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:23:48.905247798 +0000 UTC m=+111.813967070" watchObservedRunningTime="2026-01-29 16:23:48.918319595 +0000 UTC m=+111.827038907" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.936575 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-f85c7" podStartSLOduration=80.936558992 podStartE2EDuration="1m20.936558992s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:23:48.935071569 +0000 UTC m=+111.843790851" watchObservedRunningTime="2026-01-29 16:23:48.936558992 +0000 UTC m=+111.845278274" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.951795 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.951846 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.951858 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.951875 4886 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:48 crc kubenswrapper[4886]: I0129 16:23:48.951888 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:48Z","lastTransitionTime":"2026-01-29T16:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.054917 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.054978 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.054994 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.055020 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.055037 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:49Z","lastTransitionTime":"2026-01-29T16:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.159452 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.159550 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.159571 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.159591 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.159603 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:49Z","lastTransitionTime":"2026-01-29T16:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.262631 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.262815 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.262841 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.262875 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.262901 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:49Z","lastTransitionTime":"2026-01-29T16:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.366036 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.366106 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.366128 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.366152 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.366168 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:49Z","lastTransitionTime":"2026-01-29T16:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.469040 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.469110 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.469129 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.469154 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.469556 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:49Z","lastTransitionTime":"2026-01-29T16:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.574970 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.575016 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.575027 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.575045 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.575055 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:49Z","lastTransitionTime":"2026-01-29T16:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.637510 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 19:52:03.406313113 +0000 UTC Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.678988 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.679043 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.679058 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.679082 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.679099 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:49Z","lastTransitionTime":"2026-01-29T16:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.782105 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.782164 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.782183 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.782209 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.782228 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:49Z","lastTransitionTime":"2026-01-29T16:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.885319 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.885408 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.885427 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.885452 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.885469 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:49Z","lastTransitionTime":"2026-01-29T16:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.988531 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.988634 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.988666 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.988702 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:49 crc kubenswrapper[4886]: I0129 16:23:49.988728 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:49Z","lastTransitionTime":"2026-01-29T16:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.092833 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.092882 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.092897 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.092921 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.092940 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:50Z","lastTransitionTime":"2026-01-29T16:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.197002 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.197117 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.197136 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.197159 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.197177 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:50Z","lastTransitionTime":"2026-01-29T16:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.299656 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.299713 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.299728 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.299750 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.299765 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:50Z","lastTransitionTime":"2026-01-29T16:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.402701 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.402769 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.402792 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.402817 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.402834 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:50Z","lastTransitionTime":"2026-01-29T16:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.506224 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.506262 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.506270 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.506288 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.506298 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:50Z","lastTransitionTime":"2026-01-29T16:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.609011 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.609062 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.609082 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.609110 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.609129 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:50Z","lastTransitionTime":"2026-01-29T16:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.618921 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:50 crc kubenswrapper[4886]: E0129 16:23:50.619097 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.619468 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:23:50 crc kubenswrapper[4886]: E0129 16:23:50.619696 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.619787 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.619820 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:50 crc kubenswrapper[4886]: E0129 16:23:50.619861 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:23:50 crc kubenswrapper[4886]: E0129 16:23:50.620197 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.620447 4886 scope.go:117] "RemoveContainer" containerID="a0641acb8929ee41033e4169acb367c2a8a89a440e89fc29dde22190651e439f" Jan 29 16:23:50 crc kubenswrapper[4886]: E0129 16:23:50.620589 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bsnwn_openshift-ovn-kubernetes(d46238ab-90d4-41b8-b546-6dbff06cf5ed)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.638369 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 01:38:11.906493424 +0000 UTC Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.711617 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.711651 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.711660 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.711673 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.711682 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:50Z","lastTransitionTime":"2026-01-29T16:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.814502 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.814572 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.814598 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.814630 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.814648 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:50Z","lastTransitionTime":"2026-01-29T16:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.918538 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.918603 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.918616 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.918640 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:50 crc kubenswrapper[4886]: I0129 16:23:50.918656 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:50Z","lastTransitionTime":"2026-01-29T16:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.021652 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.021731 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.021744 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.021771 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.021786 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:51Z","lastTransitionTime":"2026-01-29T16:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.125682 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.125794 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.125815 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.125848 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.125869 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:51Z","lastTransitionTime":"2026-01-29T16:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.229656 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.229774 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.229801 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.229834 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.229858 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:51Z","lastTransitionTime":"2026-01-29T16:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.332660 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.332714 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.332731 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.332760 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.332778 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:51Z","lastTransitionTime":"2026-01-29T16:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.340029 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.340086 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.340100 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.340123 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.340145 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:23:51Z","lastTransitionTime":"2026-01-29T16:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.396857 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=56.396840915 podStartE2EDuration="56.396840915s" podCreationTimestamp="2026-01-29 16:22:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:23:48.948665182 +0000 UTC m=+111.857384454" watchObservedRunningTime="2026-01-29 16:23:51.396840915 +0000 UTC m=+114.305560187" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.397774 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4nft"] Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.398082 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4nft" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.401563 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.402068 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.402125 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.402225 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.507532 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5851bea8-a259-4dc8-a9f2-37961b54c1d5-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-x4nft\" (UID: \"5851bea8-a259-4dc8-a9f2-37961b54c1d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4nft" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.507581 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5851bea8-a259-4dc8-a9f2-37961b54c1d5-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-x4nft\" (UID: \"5851bea8-a259-4dc8-a9f2-37961b54c1d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4nft" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.507612 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5851bea8-a259-4dc8-a9f2-37961b54c1d5-service-ca\") pod \"cluster-version-operator-5c965bbfc6-x4nft\" (UID: \"5851bea8-a259-4dc8-a9f2-37961b54c1d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4nft" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.507632 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5851bea8-a259-4dc8-a9f2-37961b54c1d5-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-x4nft\" (UID: \"5851bea8-a259-4dc8-a9f2-37961b54c1d5\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4nft" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.507779 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5851bea8-a259-4dc8-a9f2-37961b54c1d5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-x4nft\" (UID: \"5851bea8-a259-4dc8-a9f2-37961b54c1d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4nft" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.609284 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5851bea8-a259-4dc8-a9f2-37961b54c1d5-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-x4nft\" (UID: \"5851bea8-a259-4dc8-a9f2-37961b54c1d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4nft" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.609369 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5851bea8-a259-4dc8-a9f2-37961b54c1d5-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-x4nft\" (UID: \"5851bea8-a259-4dc8-a9f2-37961b54c1d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4nft" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.609415 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5851bea8-a259-4dc8-a9f2-37961b54c1d5-service-ca\") pod \"cluster-version-operator-5c965bbfc6-x4nft\" (UID: \"5851bea8-a259-4dc8-a9f2-37961b54c1d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4nft" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.609453 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5851bea8-a259-4dc8-a9f2-37961b54c1d5-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-x4nft\" (UID: \"5851bea8-a259-4dc8-a9f2-37961b54c1d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4nft" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.609477 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5851bea8-a259-4dc8-a9f2-37961b54c1d5-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-x4nft\" (UID: \"5851bea8-a259-4dc8-a9f2-37961b54c1d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4nft" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.609488 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5851bea8-a259-4dc8-a9f2-37961b54c1d5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-x4nft\" (UID: \"5851bea8-a259-4dc8-a9f2-37961b54c1d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4nft" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.609570 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5851bea8-a259-4dc8-a9f2-37961b54c1d5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-x4nft\" (UID: \"5851bea8-a259-4dc8-a9f2-37961b54c1d5\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4nft" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.610648 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5851bea8-a259-4dc8-a9f2-37961b54c1d5-service-ca\") pod \"cluster-version-operator-5c965bbfc6-x4nft\" (UID: \"5851bea8-a259-4dc8-a9f2-37961b54c1d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4nft" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.616239 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5851bea8-a259-4dc8-a9f2-37961b54c1d5-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-x4nft\" (UID: \"5851bea8-a259-4dc8-a9f2-37961b54c1d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4nft" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.626822 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5851bea8-a259-4dc8-a9f2-37961b54c1d5-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-x4nft\" (UID: \"5851bea8-a259-4dc8-a9f2-37961b54c1d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4nft" Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.639576 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 10:01:00.935821263 +0000 UTC Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.639646 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.647934 4886 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 29 16:23:51 crc kubenswrapper[4886]: I0129 16:23:51.714109 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4nft" Jan 29 16:23:51 crc kubenswrapper[4886]: W0129 16:23:51.731213 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5851bea8_a259_4dc8_a9f2_37961b54c1d5.slice/crio-c5d711bf19c206e1d142d89ad23d38742973c726c8f75fb73c0d698d5be88913 WatchSource:0}: Error finding container c5d711bf19c206e1d142d89ad23d38742973c726c8f75fb73c0d698d5be88913: Status 404 returned error can't find the container with id c5d711bf19c206e1d142d89ad23d38742973c726c8f75fb73c0d698d5be88913 Jan 29 16:23:52 crc kubenswrapper[4886]: I0129 16:23:52.350640 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4nft" event={"ID":"5851bea8-a259-4dc8-a9f2-37961b54c1d5","Type":"ContainerStarted","Data":"92c9be6506e7c6e98e43948e4dd75d88afe7dd3830cbf73c51a2d625b963b230"} Jan 29 16:23:52 crc kubenswrapper[4886]: I0129 16:23:52.351055 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4nft" event={"ID":"5851bea8-a259-4dc8-a9f2-37961b54c1d5","Type":"ContainerStarted","Data":"c5d711bf19c206e1d142d89ad23d38742973c726c8f75fb73c0d698d5be88913"} Jan 29 16:23:52 crc kubenswrapper[4886]: I0129 16:23:52.367285 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4nft" podStartSLOduration=84.367263762 podStartE2EDuration="1m24.367263762s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:23:52.366590723 +0000 UTC m=+115.275310045" watchObservedRunningTime="2026-01-29 16:23:52.367263762 +0000 UTC m=+115.275983054" Jan 29 16:23:52 crc kubenswrapper[4886]: I0129 16:23:52.615057 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:52 crc kubenswrapper[4886]: I0129 16:23:52.615157 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:52 crc kubenswrapper[4886]: I0129 16:23:52.615169 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:23:52 crc kubenswrapper[4886]: E0129 16:23:52.615188 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:23:52 crc kubenswrapper[4886]: E0129 16:23:52.615297 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:23:52 crc kubenswrapper[4886]: E0129 16:23:52.615451 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:23:52 crc kubenswrapper[4886]: I0129 16:23:52.615488 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:52 crc kubenswrapper[4886]: E0129 16:23:52.615582 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:23:54 crc kubenswrapper[4886]: I0129 16:23:54.614575 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:23:54 crc kubenswrapper[4886]: I0129 16:23:54.614728 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:54 crc kubenswrapper[4886]: I0129 16:23:54.614778 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:54 crc kubenswrapper[4886]: I0129 16:23:54.615005 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:54 crc kubenswrapper[4886]: E0129 16:23:54.615009 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:23:54 crc kubenswrapper[4886]: E0129 16:23:54.615164 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:23:54 crc kubenswrapper[4886]: E0129 16:23:54.615253 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:23:54 crc kubenswrapper[4886]: E0129 16:23:54.615416 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:23:56 crc kubenswrapper[4886]: I0129 16:23:56.614721 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:56 crc kubenswrapper[4886]: I0129 16:23:56.614780 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:56 crc kubenswrapper[4886]: I0129 16:23:56.614742 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:23:56 crc kubenswrapper[4886]: I0129 16:23:56.614943 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:56 crc kubenswrapper[4886]: E0129 16:23:56.615211 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:23:56 crc kubenswrapper[4886]: E0129 16:23:56.615406 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:23:56 crc kubenswrapper[4886]: E0129 16:23:56.615504 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:23:56 crc kubenswrapper[4886]: E0129 16:23:56.615669 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:23:58 crc kubenswrapper[4886]: I0129 16:23:58.614692 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:23:58 crc kubenswrapper[4886]: I0129 16:23:58.614760 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:23:58 crc kubenswrapper[4886]: I0129 16:23:58.614698 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:23:58 crc kubenswrapper[4886]: E0129 16:23:58.616263 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:23:58 crc kubenswrapper[4886]: I0129 16:23:58.616321 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:23:58 crc kubenswrapper[4886]: E0129 16:23:58.616561 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:23:58 crc kubenswrapper[4886]: E0129 16:23:58.616664 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:23:58 crc kubenswrapper[4886]: E0129 16:23:58.616768 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:23:58 crc kubenswrapper[4886]: E0129 16:23:58.653056 4886 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 29 16:23:58 crc kubenswrapper[4886]: E0129 16:23:58.751485 4886 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 16:24:00 crc kubenswrapper[4886]: I0129 16:24:00.614640 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:24:00 crc kubenswrapper[4886]: I0129 16:24:00.614872 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:24:00 crc kubenswrapper[4886]: E0129 16:24:00.615641 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:24:00 crc kubenswrapper[4886]: I0129 16:24:00.614817 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:24:00 crc kubenswrapper[4886]: I0129 16:24:00.614901 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:24:00 crc kubenswrapper[4886]: E0129 16:24:00.615779 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:24:00 crc kubenswrapper[4886]: E0129 16:24:00.615939 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:24:00 crc kubenswrapper[4886]: E0129 16:24:00.616015 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:24:02 crc kubenswrapper[4886]: I0129 16:24:02.615269 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:24:02 crc kubenswrapper[4886]: I0129 16:24:02.615544 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:24:02 crc kubenswrapper[4886]: I0129 16:24:02.615463 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:24:02 crc kubenswrapper[4886]: I0129 16:24:02.615450 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:24:02 crc kubenswrapper[4886]: E0129 16:24:02.615816 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:24:02 crc kubenswrapper[4886]: E0129 16:24:02.615859 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:24:02 crc kubenswrapper[4886]: E0129 16:24:02.615953 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:24:02 crc kubenswrapper[4886]: E0129 16:24:02.616021 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:24:02 crc kubenswrapper[4886]: I0129 16:24:02.616061 4886 scope.go:117] "RemoveContainer" containerID="a0641acb8929ee41033e4169acb367c2a8a89a440e89fc29dde22190651e439f" Jan 29 16:24:02 crc kubenswrapper[4886]: E0129 16:24:02.617008 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bsnwn_openshift-ovn-kubernetes(d46238ab-90d4-41b8-b546-6dbff06cf5ed)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" Jan 29 16:24:03 crc kubenswrapper[4886]: E0129 16:24:03.752917 4886 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 16:24:04 crc kubenswrapper[4886]: I0129 16:24:04.614420 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:24:04 crc kubenswrapper[4886]: I0129 16:24:04.614509 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:24:04 crc kubenswrapper[4886]: E0129 16:24:04.614596 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:24:04 crc kubenswrapper[4886]: I0129 16:24:04.614537 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:24:04 crc kubenswrapper[4886]: E0129 16:24:04.614698 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:24:04 crc kubenswrapper[4886]: I0129 16:24:04.614747 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:24:04 crc kubenswrapper[4886]: E0129 16:24:04.614834 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc" Jan 29 16:24:04 crc kubenswrapper[4886]: E0129 16:24:04.614927 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:24:06 crc kubenswrapper[4886]: I0129 16:24:06.397282 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4dstj_b415d17e-f329-40e7-8a3f-32881cb5347a/kube-multus/1.log" Jan 29 16:24:06 crc kubenswrapper[4886]: I0129 16:24:06.398430 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4dstj_b415d17e-f329-40e7-8a3f-32881cb5347a/kube-multus/0.log" Jan 29 16:24:06 crc kubenswrapper[4886]: I0129 16:24:06.398508 4886 generic.go:334] "Generic (PLEG): container finished" podID="b415d17e-f329-40e7-8a3f-32881cb5347a" containerID="0fbf425aaf0e257fa72dc096677e8404be047665a998729a21862b66d4162248" exitCode=1 Jan 29 16:24:06 crc kubenswrapper[4886]: I0129 16:24:06.398556 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4dstj" event={"ID":"b415d17e-f329-40e7-8a3f-32881cb5347a","Type":"ContainerDied","Data":"0fbf425aaf0e257fa72dc096677e8404be047665a998729a21862b66d4162248"} Jan 29 16:24:06 crc kubenswrapper[4886]: I0129 16:24:06.398668 4886 scope.go:117] "RemoveContainer" containerID="91bc81a70f1b981695edb5dadd59d00a0cb86a456b79637ad3aa6115ca96f7df" Jan 29 16:24:06 crc kubenswrapper[4886]: I0129 16:24:06.399150 4886 scope.go:117] "RemoveContainer" containerID="0fbf425aaf0e257fa72dc096677e8404be047665a998729a21862b66d4162248" Jan 29 16:24:06 crc kubenswrapper[4886]: E0129 16:24:06.399701 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-4dstj_openshift-multus(b415d17e-f329-40e7-8a3f-32881cb5347a)\"" pod="openshift-multus/multus-4dstj" podUID="b415d17e-f329-40e7-8a3f-32881cb5347a" Jan 29 16:24:06 crc kubenswrapper[4886]: I0129 16:24:06.614943 4886 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:24:06 crc kubenswrapper[4886]: I0129 16:24:06.615001 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:24:06 crc kubenswrapper[4886]: E0129 16:24:06.615110 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:24:06 crc kubenswrapper[4886]: I0129 16:24:06.615151 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw"
Jan 29 16:24:06 crc kubenswrapper[4886]: E0129 16:24:06.615268 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:24:06 crc kubenswrapper[4886]: I0129 16:24:06.615397 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:24:06 crc kubenswrapper[4886]: E0129 16:24:06.615555 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc"
Jan 29 16:24:06 crc kubenswrapper[4886]: E0129 16:24:06.615677 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:24:07 crc kubenswrapper[4886]: I0129 16:24:07.404180 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4dstj_b415d17e-f329-40e7-8a3f-32881cb5347a/kube-multus/1.log"
Jan 29 16:24:08 crc kubenswrapper[4886]: I0129 16:24:08.614576 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:24:08 crc kubenswrapper[4886]: I0129 16:24:08.614613 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:24:08 crc kubenswrapper[4886]: E0129 16:24:08.616633 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:24:08 crc kubenswrapper[4886]: I0129 16:24:08.616646 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:24:08 crc kubenswrapper[4886]: E0129 16:24:08.616982 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:24:08 crc kubenswrapper[4886]: E0129 16:24:08.616744 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:24:08 crc kubenswrapper[4886]: I0129 16:24:08.616665 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw"
Jan 29 16:24:08 crc kubenswrapper[4886]: E0129 16:24:08.617439 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc"
Jan 29 16:24:08 crc kubenswrapper[4886]: E0129 16:24:08.754012 4886 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 29 16:24:10 crc kubenswrapper[4886]: I0129 16:24:10.614745 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:24:10 crc kubenswrapper[4886]: I0129 16:24:10.614851 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:24:10 crc kubenswrapper[4886]: I0129 16:24:10.615441 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:24:10 crc kubenswrapper[4886]: I0129 16:24:10.614894 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw"
Jan 29 16:24:10 crc kubenswrapper[4886]: E0129 16:24:10.615794 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:24:10 crc kubenswrapper[4886]: E0129 16:24:10.616175 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:24:10 crc kubenswrapper[4886]: E0129 16:24:10.616741 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:24:10 crc kubenswrapper[4886]: E0129 16:24:10.617032 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc"
Jan 29 16:24:12 crc kubenswrapper[4886]: I0129 16:24:12.614204 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:24:12 crc kubenswrapper[4886]: I0129 16:24:12.614254 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw"
Jan 29 16:24:12 crc kubenswrapper[4886]: I0129 16:24:12.614283 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:24:12 crc kubenswrapper[4886]: I0129 16:24:12.614366 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:24:12 crc kubenswrapper[4886]: E0129 16:24:12.614715 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:24:12 crc kubenswrapper[4886]: E0129 16:24:12.614780 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:24:12 crc kubenswrapper[4886]: E0129 16:24:12.614873 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:24:12 crc kubenswrapper[4886]: E0129 16:24:12.615011 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc"
Jan 29 16:24:13 crc kubenswrapper[4886]: I0129 16:24:13.616405 4886 scope.go:117] "RemoveContainer" containerID="a0641acb8929ee41033e4169acb367c2a8a89a440e89fc29dde22190651e439f"
Jan 29 16:24:13 crc kubenswrapper[4886]: E0129 16:24:13.755400 4886 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 29 16:24:14 crc kubenswrapper[4886]: I0129 16:24:14.426533 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bsnwn_d46238ab-90d4-41b8-b546-6dbff06cf5ed/ovnkube-controller/3.log"
Jan 29 16:24:14 crc kubenswrapper[4886]: I0129 16:24:14.428793 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" event={"ID":"d46238ab-90d4-41b8-b546-6dbff06cf5ed","Type":"ContainerStarted","Data":"f3e810b92c533dbff0b37232e3b59d6146e02214a9506edd851862a6737312a5"}
Jan 29 16:24:14 crc kubenswrapper[4886]: I0129 16:24:14.429216 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn"
Jan 29 16:24:14 crc kubenswrapper[4886]: I0129 16:24:14.458825 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" podStartSLOduration=106.458807948 podStartE2EDuration="1m46.458807948s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:14.456337935 +0000 UTC m=+137.365057207" watchObservedRunningTime="2026-01-29 16:24:14.458807948 +0000 UTC m=+137.367527220"
Jan 29 16:24:14 crc kubenswrapper[4886]: I0129 16:24:14.467927 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-c7wkw"]
Jan 29 16:24:14 crc kubenswrapper[4886]: I0129 16:24:14.468022 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw"
Jan 29 16:24:14 crc kubenswrapper[4886]: E0129 16:24:14.468106 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc"
Jan 29 16:24:14 crc kubenswrapper[4886]: I0129 16:24:14.614259 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:24:14 crc kubenswrapper[4886]: E0129 16:24:14.614439 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:24:14 crc kubenswrapper[4886]: I0129 16:24:14.614446 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:24:14 crc kubenswrapper[4886]: I0129 16:24:14.614536 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:24:14 crc kubenswrapper[4886]: E0129 16:24:14.614685 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:24:14 crc kubenswrapper[4886]: E0129 16:24:14.614828 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:24:16 crc kubenswrapper[4886]: I0129 16:24:16.614815 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw"
Jan 29 16:24:16 crc kubenswrapper[4886]: I0129 16:24:16.614910 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:24:16 crc kubenswrapper[4886]: E0129 16:24:16.615498 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:24:16 crc kubenswrapper[4886]: I0129 16:24:16.614910 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:24:16 crc kubenswrapper[4886]: E0129 16:24:16.615634 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:24:16 crc kubenswrapper[4886]: I0129 16:24:16.614965 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:24:16 crc kubenswrapper[4886]: E0129 16:24:16.615281 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc"
Jan 29 16:24:16 crc kubenswrapper[4886]: E0129 16:24:16.615737 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:24:18 crc kubenswrapper[4886]: I0129 16:24:18.614797 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:24:18 crc kubenswrapper[4886]: I0129 16:24:18.614840 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw"
Jan 29 16:24:18 crc kubenswrapper[4886]: I0129 16:24:18.614846 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:24:18 crc kubenswrapper[4886]: E0129 16:24:18.617509 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:24:18 crc kubenswrapper[4886]: I0129 16:24:18.617597 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:24:18 crc kubenswrapper[4886]: E0129 16:24:18.617743 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:24:18 crc kubenswrapper[4886]: E0129 16:24:18.617837 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:24:18 crc kubenswrapper[4886]: E0129 16:24:18.617994 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc"
Jan 29 16:24:18 crc kubenswrapper[4886]: E0129 16:24:18.756611 4886 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 29 16:24:20 crc kubenswrapper[4886]: I0129 16:24:20.614495 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:24:20 crc kubenswrapper[4886]: I0129 16:24:20.614492 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:24:20 crc kubenswrapper[4886]: I0129 16:24:20.614545 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:24:20 crc kubenswrapper[4886]: I0129 16:24:20.614579 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw"
Jan 29 16:24:20 crc kubenswrapper[4886]: E0129 16:24:20.614675 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:24:20 crc kubenswrapper[4886]: I0129 16:24:20.614833 4886 scope.go:117] "RemoveContainer" containerID="0fbf425aaf0e257fa72dc096677e8404be047665a998729a21862b66d4162248"
Jan 29 16:24:20 crc kubenswrapper[4886]: E0129 16:24:20.615071 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc"
Jan 29 16:24:20 crc kubenswrapper[4886]: E0129 16:24:20.615210 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:24:20 crc kubenswrapper[4886]: E0129 16:24:20.615403 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:24:21 crc kubenswrapper[4886]: I0129 16:24:21.453698 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4dstj_b415d17e-f329-40e7-8a3f-32881cb5347a/kube-multus/1.log"
Jan 29 16:24:21 crc kubenswrapper[4886]: I0129 16:24:21.454026 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4dstj" event={"ID":"b415d17e-f329-40e7-8a3f-32881cb5347a","Type":"ContainerStarted","Data":"e74f1c8b65fe500a145e8a234d995565d439027c89c5aa1da47c13b626c7d606"}
Jan 29 16:24:22 crc kubenswrapper[4886]: I0129 16:24:22.614588 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:24:22 crc kubenswrapper[4886]: I0129 16:24:22.614640 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:24:22 crc kubenswrapper[4886]: E0129 16:24:22.614813 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:24:22 crc kubenswrapper[4886]: I0129 16:24:22.614872 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:24:22 crc kubenswrapper[4886]: I0129 16:24:22.614888 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw"
Jan 29 16:24:22 crc kubenswrapper[4886]: E0129 16:24:22.615035 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:24:22 crc kubenswrapper[4886]: E0129 16:24:22.615220 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:24:22 crc kubenswrapper[4886]: E0129 16:24:22.615370 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c7wkw" podUID="75261312-030c-44eb-8d08-07a35f5bcfcc"
Jan 29 16:24:24 crc kubenswrapper[4886]: I0129 16:24:24.613982 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:24:24 crc kubenswrapper[4886]: I0129 16:24:24.614043 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw"
Jan 29 16:24:24 crc kubenswrapper[4886]: I0129 16:24:24.614064 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:24:24 crc kubenswrapper[4886]: I0129 16:24:24.614154 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:24:24 crc kubenswrapper[4886]: I0129 16:24:24.616873 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 29 16:24:24 crc kubenswrapper[4886]: I0129 16:24:24.617619 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 29 16:24:24 crc kubenswrapper[4886]: I0129 16:24:24.618007 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 29 16:24:24 crc kubenswrapper[4886]: I0129 16:24:24.618055 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 29 16:24:24 crc kubenswrapper[4886]: I0129 16:24:24.618541 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 29 16:24:24 crc kubenswrapper[4886]: I0129 16:24:24.621730 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 29 16:24:29 crc kubenswrapper[4886]: I0129 16:24:29.696247 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn"
Jan 29 16:24:30 crc kubenswrapper[4886]: I0129 16:24:30.557240 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:24:30 crc kubenswrapper[4886]: I0129 16:24:30.557400 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:24:30 crc kubenswrapper[4886]: I0129 16:24:30.557436 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:24:30 crc kubenswrapper[4886]: I0129 16:24:30.557457 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:24:30 crc kubenswrapper[4886]: I0129 16:24:30.557487 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:24:30 crc kubenswrapper[4886]: E0129 16:24:30.557722 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:26:32.557678287 +0000 UTC m=+275.466397599 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:24:30 crc kubenswrapper[4886]: I0129 16:24:30.558805 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:24:30 crc kubenswrapper[4886]: I0129 16:24:30.562923 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:24:30 crc kubenswrapper[4886]: I0129 16:24:30.563142 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:24:30 crc kubenswrapper[4886]: I0129 16:24:30.563963 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:24:30 crc kubenswrapper[4886]: I0129 16:24:30.653570 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:24:30 crc kubenswrapper[4886]: I0129 16:24:30.664778 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:24:30 crc kubenswrapper[4886]: I0129 16:24:30.672618 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:24:30 crc kubenswrapper[4886]: W0129 16:24:30.922762 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-62d008f596f0e09026087d250f276874d53973ec1b9f2bc09b0cd926113ffae6 WatchSource:0}: Error finding container 62d008f596f0e09026087d250f276874d53973ec1b9f2bc09b0cd926113ffae6: Status 404 returned error can't find the container with id 62d008f596f0e09026087d250f276874d53973ec1b9f2bc09b0cd926113ffae6
Jan 29 16:24:31 crc kubenswrapper[4886]: W0129 16:24:31.168044 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-f9a65aa2021d5092dd621aa8182cb15c60ab11806cacb5cbac752b9d8c7595f4 WatchSource:0}: Error finding container f9a65aa2021d5092dd621aa8182cb15c60ab11806cacb5cbac752b9d8c7595f4: Status 404 returned error can't find the container with id f9a65aa2021d5092dd621aa8182cb15c60ab11806cacb5cbac752b9d8c7595f4
Jan 29 16:24:31 crc kubenswrapper[4886]: W0129 16:24:31.179572 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-cc722e1fc2b178cbe62dacc940a1cab6bef2ef0d588ae46e5e15f868a3346be3 WatchSource:0}: Error finding container cc722e1fc2b178cbe62dacc940a1cab6bef2ef0d588ae46e5e15f868a3346be3: Status 404 returned error can't find the container with id cc722e1fc2b178cbe62dacc940a1cab6bef2ef0d588ae46e5e15f868a3346be3
Jan 29 16:24:31 crc kubenswrapper[4886]: I0129 16:24:31.496536 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"2f404bfb13e6a7706d346652c15f870c4815b4058f4c93b5c197a836fdce7319"}
Jan 29 16:24:31 crc kubenswrapper[4886]: I0129 16:24:31.496685 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"cc722e1fc2b178cbe62dacc940a1cab6bef2ef0d588ae46e5e15f868a3346be3"}
Jan 29 16:24:31 crc kubenswrapper[4886]: I0129 16:24:31.496968 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:24:31 crc kubenswrapper[4886]: I0129 16:24:31.499887 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"76f3bbdf064d7dbc28229e15b8f1a89d3a7e90613f1b3004355f588c706acdf9"}
Jan 29 16:24:31 crc kubenswrapper[4886]: I0129 16:24:31.499965 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f9a65aa2021d5092dd621aa8182cb15c60ab11806cacb5cbac752b9d8c7595f4"}
Jan 29 16:24:31 crc kubenswrapper[4886]: I0129 16:24:31.502958 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"65bffb592245765658a8c2e106592b9be1ba3471e94ff254f66062a470732b5d"}
Jan 29 16:24:31 crc kubenswrapper[4886]: I0129 16:24:31.503035 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"62d008f596f0e09026087d250f276874d53973ec1b9f2bc09b0cd926113ffae6"}
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.280520 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.326076 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.326553 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-frztl"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.326846 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-frztl"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.327259 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.329675 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4rg2h"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.330487 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4rg2h"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.331870 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.332524 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wjnz"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.332906 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-v5s4w"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.333095 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.333119 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wjnz"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.334378 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bj8hg"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.334508 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-v5s4w"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.334785 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mpttg"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.335248 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-bj8hg"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.335269 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mpttg"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.344634 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.348769 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.349150 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.349751 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.349757 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.349917 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.350151 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.350202 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.350362 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.350472 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.350571 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.350614 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.350873 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.350878 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.351007 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.351164 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.351846 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pgq49"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.352656 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pgq49"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.353042 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5l855"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.353705 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5l855"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.356698 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.357557 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.357699 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.358145 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.358275 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.358636 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.360131 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.360388 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.362749 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.363460 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.367796 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.368161 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.368428 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.368496 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.368526 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.368548 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.368663 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.368705 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.368784 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.368813 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.368857 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.368885 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.368430 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.369097 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.369248 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.369372 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.369756 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.370002 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.370764 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.371416 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-wczvq"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.371859 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-wczvq"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.375281 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.375510 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.389233 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.389585 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.389655 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.389766 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.389863 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.389994 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.390084 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.397248 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.397424 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.397554 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.407879 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.409234 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hvwx7"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.409716 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hvwx7"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.409955 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.410406 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.410636 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.410827 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.411003 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.411171 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.411979 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-bxbsl"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.412538 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ghfg9"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.412834 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ghfg9"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.413066 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bxbsl"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.413904 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.414290 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.414527 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.417427 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-zjtrn"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.417805 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-z5kbx"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.418198 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-x62jn"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.418722 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-x62jn"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.419210 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-zjtrn"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.419504 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z5kbx"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.419949 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.422455 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-44l86"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.422743 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-spj4x"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.422964 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-fgmg6"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.423304 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-fgmg6"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.423518 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-44l86"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.423544 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-spj4x"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.430831 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.431052 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.432379 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.432815 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.433031 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.433130 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.433449 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.435087 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fkbjz"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.435590 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.435669 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fkbjz"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.435736 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.436188 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-m2x88"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.438178 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.451788 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.452644 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.453419 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.453684 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.453793 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.454258 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.454302 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.454570 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.454753 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.454870 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.455404 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.455504 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.455621 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.454306 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.455857 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.455878 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.456156 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.454603 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.458105 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.458411 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.459122 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.465481 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.481232 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.481245 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.481513 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.482375 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mpttg"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.482404 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bj8hg"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.482479 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m2x88"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.486948 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.487012 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.487029 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.487897 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wjnz"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.488845 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-zrg4t"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.489455 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-zrg4t"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.489856 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-hjw5r"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.490982 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.491019 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hjw5r"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.493504 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l5v6d"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.494170 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l5v6d"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.495775 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-24n77"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.496239 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-24n77"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.496541 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n5wvz"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.496875 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n5wvz"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.497356 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.497537 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-f2q4h"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.498051 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-f2q4h"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.499516 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kr4cn"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.500314 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495055-bkqmf"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.500703 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-bkqmf"
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.500775 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-plhr2"]
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.500935 4886 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kr4cn" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.501416 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-plhr2" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.503195 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-z4r4v"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.503670 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p42xx"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.504051 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p42xx" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.504276 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-z4r4v" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.505079 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-8qsrq"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.506035 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-8qsrq" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.506225 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pwcz"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.506822 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pwcz" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.509014 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.509455 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssftv"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.509936 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssftv" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.512868 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w8bm4"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.513298 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-w8bm4" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.514494 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-blldt"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.515198 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-blldt" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.515575 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3510e180-be29-469c-bfa0-b06702f80c93-images\") pod \"machine-api-operator-5694c8668f-fgmg6\" (UID: \"3510e180-be29-469c-bfa0-b06702f80c93\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fgmg6" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.515606 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/324f040b-716b-41ff-80af-acd92d47a95d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5l855\" (UID: \"324f040b-716b-41ff-80af-acd92d47a95d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5l855" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.515632 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3790628-7588-42bf-ace6-04e2a0f1a09a-config\") pod \"kube-apiserver-operator-766d6c64bb-spj4x\" (UID: \"e3790628-7588-42bf-ace6-04e2a0f1a09a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-spj4x" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.515650 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/793b5b1f-d882-4f05-be9f-7515433a91e7-metrics-tls\") pod \"ingress-operator-5b745b69d9-z5kbx\" (UID: \"793b5b1f-d882-4f05-be9f-7515433a91e7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z5kbx" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.515671 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d35f633-a6e9-4890-8c3f-ec87291ac03f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.515711 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c5463e2-9818-4a5e-8dd0-36cd4c78d749-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jwmkt\" (UID: \"7c5463e2-9818-4a5e-8dd0-36cd4c78d749\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.515726 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/793b5b1f-d882-4f05-be9f-7515433a91e7-trusted-ca\") pod \"ingress-operator-5b745b69d9-z5kbx\" (UID: \"793b5b1f-d882-4f05-be9f-7515433a91e7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z5kbx" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.515769 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1d35f633-a6e9-4890-8c3f-ec87291ac03f-audit\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc 
kubenswrapper[4886]: I0129 16:24:32.515788 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.515807 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d42a606f-2b2f-4782-ba98-15d8662eb3a9-metrics-tls\") pod \"dns-operator-744455d44c-x62jn\" (UID: \"d42a606f-2b2f-4782-ba98-15d8662eb3a9\") " pod="openshift-dns-operator/dns-operator-744455d44c-x62jn" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.515856 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.515887 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kbwd\" (UniqueName: \"kubernetes.io/projected/204067d9-20d8-440f-88f4-57b6ce3a0ef1-kube-api-access-9kbwd\") pod \"authentication-operator-69f744f599-zjtrn\" (UID: \"204067d9-20d8-440f-88f4-57b6ce3a0ef1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zjtrn" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.515907 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7c5463e2-9818-4a5e-8dd0-36cd4c78d749-encryption-config\") pod \"apiserver-7bbb656c7d-jwmkt\" (UID: \"7c5463e2-9818-4a5e-8dd0-36cd4c78d749\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.515929 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb068b0a-4b6b-48b7-bae4-ab193394f299-config\") pod \"route-controller-manager-6576b87f9c-h57m9\" (UID: \"eb068b0a-4b6b-48b7-bae4-ab193394f299\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.515949 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/204067d9-20d8-440f-88f4-57b6ce3a0ef1-serving-cert\") pod \"authentication-operator-69f744f599-zjtrn\" (UID: \"204067d9-20d8-440f-88f4-57b6ce3a0ef1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zjtrn" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.515967 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/204067d9-20d8-440f-88f4-57b6ce3a0ef1-service-ca-bundle\") pod \"authentication-operator-69f744f599-zjtrn\" (UID: \"204067d9-20d8-440f-88f4-57b6ce3a0ef1\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-zjtrn" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.515986 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1d35f633-a6e9-4890-8c3f-ec87291ac03f-encryption-config\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516007 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516030 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a8ec6d15-494f-427c-b532-adebe8e9d910-etcd-client\") pod \"etcd-operator-b45778765-hvwx7\" (UID: \"a8ec6d15-494f-427c-b532-adebe8e9d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hvwx7" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516049 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516067 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516085 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-console-config\") pod \"console-f9d7485db-frztl\" (UID: \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\") " pod="openshift-console/console-f9d7485db-frztl" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516101 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/324f040b-716b-41ff-80af-acd92d47a95d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5l855\" (UID: \"324f040b-716b-41ff-80af-acd92d47a95d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5l855" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516118 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3790628-7588-42bf-ace6-04e2a0f1a09a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-spj4x\" (UID: 
\"e3790628-7588-42bf-ace6-04e2a0f1a09a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-spj4x" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516135 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-oauth-serving-cert\") pod \"console-f9d7485db-frztl\" (UID: \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\") " pod="openshift-console/console-f9d7485db-frztl" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516152 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsclv\" (UniqueName: \"kubernetes.io/projected/d677ab93-2fac-4612-8558-8ffc559d5247-kube-api-access-jsclv\") pod \"downloads-7954f5f757-wczvq\" (UID: \"d677ab93-2fac-4612-8558-8ffc559d5247\") " pod="openshift-console/downloads-7954f5f757-wczvq" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516170 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516226 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9f6m\" (UniqueName: \"kubernetes.io/projected/7c5463e2-9818-4a5e-8dd0-36cd4c78d749-kube-api-access-w9f6m\") pod \"apiserver-7bbb656c7d-jwmkt\" (UID: \"7c5463e2-9818-4a5e-8dd0-36cd4c78d749\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516263 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/204067d9-20d8-440f-88f4-57b6ce3a0ef1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-zjtrn\" (UID: \"204067d9-20d8-440f-88f4-57b6ce3a0ef1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zjtrn" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516287 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e017d9d-e6ec-4917-b888-987be0ce0523-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8wjnz\" (UID: \"9e017d9d-e6ec-4917-b888-987be0ce0523\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wjnz" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516309 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb068b0a-4b6b-48b7-bae4-ab193394f299-serving-cert\") pod \"route-controller-manager-6576b87f9c-h57m9\" (UID: \"eb068b0a-4b6b-48b7-bae4-ab193394f299\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516344 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-trusted-ca-bundle\") pod \"console-f9d7485db-frztl\" (UID: 
\"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\") " pod="openshift-console/console-f9d7485db-frztl" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516362 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v9l5\" (UniqueName: \"kubernetes.io/projected/a8ec6d15-494f-427c-b532-adebe8e9d910-kube-api-access-5v9l5\") pod \"etcd-operator-b45778765-hvwx7\" (UID: \"a8ec6d15-494f-427c-b532-adebe8e9d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hvwx7" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516390 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d35f633-a6e9-4890-8c3f-ec87291ac03f-serving-cert\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516408 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e017d9d-e6ec-4917-b888-987be0ce0523-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8wjnz\" (UID: \"9e017d9d-e6ec-4917-b888-987be0ce0523\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wjnz" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516440 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-service-ca\") pod \"console-f9d7485db-frztl\" (UID: \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\") " pod="openshift-console/console-f9d7485db-frztl" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516457 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d35f633-a6e9-4890-8c3f-ec87291ac03f-config\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516472 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7c5463e2-9818-4a5e-8dd0-36cd4c78d749-etcd-client\") pod \"apiserver-7bbb656c7d-jwmkt\" (UID: \"7c5463e2-9818-4a5e-8dd0-36cd4c78d749\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516489 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1d35f633-a6e9-4890-8c3f-ec87291ac03f-node-pullsecrets\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516507 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d5118e4-db44-4e09-a04d-2036e251936b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4rg2h\" (UID: \"4d5118e4-db44-4e09-a04d-2036e251936b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4rg2h" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516523 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6fxw\" (UniqueName: \"kubernetes.io/projected/9e017d9d-e6ec-4917-b888-987be0ce0523-kube-api-access-f6fxw\") pod \"openshift-apiserver-operator-796bbdcf4f-8wjnz\" (UID: \"9e017d9d-e6ec-4917-b888-987be0ce0523\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wjnz" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516538 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3510e180-be29-469c-bfa0-b06702f80c93-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-fgmg6\" (UID: \"3510e180-be29-469c-bfa0-b06702f80c93\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fgmg6" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516554 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1d35f633-a6e9-4890-8c3f-ec87291ac03f-etcd-serving-ca\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516570 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b947565b-6a14-4bbd-881e-e82c33ca3a3b-audit-policies\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516585 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a8ec6d15-494f-427c-b532-adebe8e9d910-etcd-service-ca\") pod \"etcd-operator-b45778765-hvwx7\" (UID: \"a8ec6d15-494f-427c-b532-adebe8e9d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hvwx7" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516600 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8ec6d15-494f-427c-b532-adebe8e9d910-serving-cert\") pod \"etcd-operator-b45778765-hvwx7\" (UID: \"a8ec6d15-494f-427c-b532-adebe8e9d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hvwx7" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516628 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516658 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c5463e2-9818-4a5e-8dd0-36cd4c78d749-serving-cert\") pod \"apiserver-7bbb656c7d-jwmkt\" (UID: \"7c5463e2-9818-4a5e-8dd0-36cd4c78d749\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.516723 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/3510e180-be29-469c-bfa0-b06702f80c93-config\") pod \"machine-api-operator-5694c8668f-fgmg6\" (UID: \"3510e180-be29-469c-bfa0-b06702f80c93\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fgmg6" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.517018 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h8pr\" (UniqueName: \"kubernetes.io/projected/eb068b0a-4b6b-48b7-bae4-ab193394f299-kube-api-access-6h8pr\") pod \"route-controller-manager-6576b87f9c-h57m9\" (UID: \"eb068b0a-4b6b-48b7-bae4-ab193394f299\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.517073 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/204067d9-20d8-440f-88f4-57b6ce3a0ef1-config\") pod \"authentication-operator-69f744f599-zjtrn\" (UID: \"204067d9-20d8-440f-88f4-57b6ce3a0ef1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zjtrn" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.517095 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/793b5b1f-d882-4f05-be9f-7515433a91e7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-z5kbx\" (UID: \"793b5b1f-d882-4f05-be9f-7515433a91e7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z5kbx" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.517115 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bffd7e9c-5274-4e27-b5d9-7e23ae3cbfbc-serving-cert\") pod \"openshift-config-operator-7777fb866f-bxbsl\" (UID: \"bffd7e9c-5274-4e27-b5d9-7e23ae3cbfbc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bxbsl" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.517130 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7c5463e2-9818-4a5e-8dd0-36cd4c78d749-audit-dir\") pod \"apiserver-7bbb656c7d-jwmkt\" (UID: \"7c5463e2-9818-4a5e-8dd0-36cd4c78d749\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.517147 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-console-serving-cert\") pod \"console-f9d7485db-frztl\" (UID: \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\") " pod="openshift-console/console-f9d7485db-frztl" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.517235 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79844037-42b5-456b-acbd-45fc61f251d9-serving-cert\") pod \"console-operator-58897d9998-bj8hg\" (UID: \"79844037-42b5-456b-acbd-45fc61f251d9\") " pod="openshift-console-operator/console-operator-58897d9998-bj8hg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.517289 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/a8ec6d15-494f-427c-b532-adebe8e9d910-etcd-ca\") pod \"etcd-operator-b45778765-hvwx7\" (UID: \"a8ec6d15-494f-427c-b532-adebe8e9d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hvwx7" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.517311 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-76mxm"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.517313 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1d35f633-a6e9-4890-8c3f-ec87291ac03f-image-import-ca\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.517440 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5f2j\" (UniqueName: \"kubernetes.io/projected/79844037-42b5-456b-acbd-45fc61f251d9-kube-api-access-q5f2j\") pod \"console-operator-58897d9998-bj8hg\" (UID: \"79844037-42b5-456b-acbd-45fc61f251d9\") " pod="openshift-console-operator/console-operator-58897d9998-bj8hg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.517479 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.517536 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.517569 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ded0e679-6bf1-4d45-a59f-2c1b89bed863-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pgq49\" (UID: \"ded0e679-6bf1-4d45-a59f-2c1b89bed863\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pgq49" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.517601 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p42km\" (UniqueName: \"kubernetes.io/projected/ded0e679-6bf1-4d45-a59f-2c1b89bed863-kube-api-access-p42km\") pod \"cluster-samples-operator-665b6dd947-pgq49\" (UID: \"ded0e679-6bf1-4d45-a59f-2c1b89bed863\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pgq49" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.517643 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7c5463e2-9818-4a5e-8dd0-36cd4c78d749-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jwmkt\" (UID: \"7c5463e2-9818-4a5e-8dd0-36cd4c78d749\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" Jan 
29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.517671 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d5118e4-db44-4e09-a04d-2036e251936b-client-ca\") pod \"controller-manager-879f6c89f-4rg2h\" (UID: \"4d5118e4-db44-4e09-a04d-2036e251936b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4rg2h" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.517702 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.517734 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54bpz\" (UniqueName: \"kubernetes.io/projected/d42a606f-2b2f-4782-ba98-15d8662eb3a9-kube-api-access-54bpz\") pod \"dns-operator-744455d44c-x62jn\" (UID: \"d42a606f-2b2f-4782-ba98-15d8662eb3a9\") " pod="openshift-dns-operator/dns-operator-744455d44c-x62jn" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.517833 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44jkf\" (UniqueName: \"kubernetes.io/projected/4d5118e4-db44-4e09-a04d-2036e251936b-kube-api-access-44jkf\") pod \"controller-manager-879f6c89f-4rg2h\" (UID: \"4d5118e4-db44-4e09-a04d-2036e251936b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4rg2h" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.517861 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d35f633-a6e9-4890-8c3f-ec87291ac03f-audit-dir\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.517882 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfffh\" (UniqueName: \"kubernetes.io/projected/bffd7e9c-5274-4e27-b5d9-7e23ae3cbfbc-kube-api-access-rfffh\") pod \"openshift-config-operator-7777fb866f-bxbsl\" (UID: \"bffd7e9c-5274-4e27-b5d9-7e23ae3cbfbc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bxbsl" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.517901 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-76mxm" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.517902 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d5118e4-db44-4e09-a04d-2036e251936b-serving-cert\") pod \"controller-manager-879f6c89f-4rg2h\" (UID: \"4d5118e4-db44-4e09-a04d-2036e251936b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4rg2h" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.518039 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8672426-860f-4c9e-a776-094b8df786a2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ghfg9\" (UID: \"b8672426-860f-4c9e-a776-094b8df786a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ghfg9" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.518065 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8672426-860f-4c9e-a776-094b8df786a2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ghfg9\" (UID: \"b8672426-860f-4c9e-a776-094b8df786a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ghfg9" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.518086 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ht79\" (UniqueName: \"kubernetes.io/projected/3510e180-be29-469c-bfa0-b06702f80c93-kube-api-access-2ht79\") pod \"machine-api-operator-5694c8668f-fgmg6\" (UID: \"3510e180-be29-469c-bfa0-b06702f80c93\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fgmg6" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.518106 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhgl2\" (UniqueName: \"kubernetes.io/projected/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-kube-api-access-zhgl2\") pod \"console-f9d7485db-frztl\" (UID: \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\") " pod="openshift-console/console-f9d7485db-frztl" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.518124 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq28t\" (UniqueName: \"kubernetes.io/projected/324f040b-716b-41ff-80af-acd92d47a95d-kube-api-access-zq28t\") pod \"cluster-image-registry-operator-dc59b4c8b-5l855\" (UID: \"324f040b-716b-41ff-80af-acd92d47a95d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5l855" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.518142 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8ec6d15-494f-427c-b532-adebe8e9d910-config\") pod \"etcd-operator-b45778765-hvwx7\" (UID: \"a8ec6d15-494f-427c-b532-adebe8e9d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hvwx7" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.518161 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: 
\"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.518179 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqjmr\" (UniqueName: \"kubernetes.io/projected/b947565b-6a14-4bbd-881e-e82c33ca3a3b-kube-api-access-hqjmr\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.518230 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-console-oauth-config\") pod \"console-f9d7485db-frztl\" (UID: \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\") " pod="openshift-console/console-f9d7485db-frztl" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.518247 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/324f040b-716b-41ff-80af-acd92d47a95d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5l855\" (UID: \"324f040b-716b-41ff-80af-acd92d47a95d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5l855" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.518267 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sbcn\" (UniqueName: \"kubernetes.io/projected/793b5b1f-d882-4f05-be9f-7515433a91e7-kube-api-access-5sbcn\") pod \"ingress-operator-5b745b69d9-z5kbx\" (UID: \"793b5b1f-d882-4f05-be9f-7515433a91e7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z5kbx" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.518285 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8672426-860f-4c9e-a776-094b8df786a2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ghfg9\" (UID: \"b8672426-860f-4c9e-a776-094b8df786a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ghfg9" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.518301 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d35f633-a6e9-4890-8c3f-ec87291ac03f-etcd-client\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.518317 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hbrl\" (UniqueName: \"kubernetes.io/projected/1d35f633-a6e9-4890-8c3f-ec87291ac03f-kube-api-access-7hbrl\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.518366 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b947565b-6a14-4bbd-881e-e82c33ca3a3b-audit-dir\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.518391 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7c5463e2-9818-4a5e-8dd0-36cd4c78d749-audit-policies\") pod \"apiserver-7bbb656c7d-jwmkt\" (UID: \"7c5463e2-9818-4a5e-8dd0-36cd4c78d749\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.518445 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb068b0a-4b6b-48b7-bae4-ab193394f299-client-ca\") pod \"route-controller-manager-6576b87f9c-h57m9\" (UID: \"eb068b0a-4b6b-48b7-bae4-ab193394f299\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.518468 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79844037-42b5-456b-acbd-45fc61f251d9-trusted-ca\") pod \"console-operator-58897d9998-bj8hg\" (UID: \"79844037-42b5-456b-acbd-45fc61f251d9\") " pod="openshift-console-operator/console-operator-58897d9998-bj8hg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.518488 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bffd7e9c-5274-4e27-b5d9-7e23ae3cbfbc-available-featuregates\") pod \"openshift-config-operator-7777fb866f-bxbsl\" (UID: \"bffd7e9c-5274-4e27-b5d9-7e23ae3cbfbc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bxbsl" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.518531 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79844037-42b5-456b-acbd-45fc61f251d9-config\") pod \"console-operator-58897d9998-bj8hg\" (UID: \"79844037-42b5-456b-acbd-45fc61f251d9\") " pod="openshift-console-operator/console-operator-58897d9998-bj8hg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.518562 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-wczvq"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.519234 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d5118e4-db44-4e09-a04d-2036e251936b-config\") pod \"controller-manager-879f6c89f-4rg2h\" (UID: \"4d5118e4-db44-4e09-a04d-2036e251936b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4rg2h" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.519264 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3790628-7588-42bf-ace6-04e2a0f1a09a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-spj4x\" (UID: \"e3790628-7588-42bf-ace6-04e2a0f1a09a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-spj4x" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.521255 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4rg2h"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.525819 4886 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-z5kbx"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.527684 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-bxbsl"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.529689 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.535084 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-frztl"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.538956 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-fgmg6"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.544448 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n5wvz"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.547635 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-x62jn"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.549106 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.553129 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-v5s4w"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.556366 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fkbjz"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.556412 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-24n77"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.556913 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ghfg9"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.558079 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-hjw5r"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.559174 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495055-bkqmf"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.561141 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-plhr2"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.562698 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5l855"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.563681 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hvwx7"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.565202 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-spj4x"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.566135 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-w8bm4"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.567916 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssftv"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.569436 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.572475 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-44l86"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.573112 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kr4cn"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.574457 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-z4r4v"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.575475 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l5v6d"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.576610 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-8qsrq"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.578335 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pgq49"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.581728 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-blldt"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.583830 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jfbvx"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.585148 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-jfbvx" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.586283 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-f2q4h"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.587761 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-2c5f9"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.589261 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2c5f9" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.589804 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.590131 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-zjtrn"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.591966 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p42xx"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.593686 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-76mxm"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.595218 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jfbvx"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.597270 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pwcz"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.598463 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-2c5f9"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.600202 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-dddt4"] Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.600956 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-dddt4" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.621603 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.623827 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8672426-860f-4c9e-a776-094b8df786a2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ghfg9\" (UID: \"b8672426-860f-4c9e-a776-094b8df786a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ghfg9" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.623863 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8672426-860f-4c9e-a776-094b8df786a2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ghfg9\" (UID: \"b8672426-860f-4c9e-a776-094b8df786a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ghfg9" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.623891 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d5118e4-db44-4e09-a04d-2036e251936b-serving-cert\") pod \"controller-manager-879f6c89f-4rg2h\" (UID: \"4d5118e4-db44-4e09-a04d-2036e251936b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4rg2h" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.623909 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ht79\" (UniqueName: \"kubernetes.io/projected/3510e180-be29-469c-bfa0-b06702f80c93-kube-api-access-2ht79\") pod \"machine-api-operator-5694c8668f-fgmg6\" (UID: 
\"3510e180-be29-469c-bfa0-b06702f80c93\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fgmg6" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.623930 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhgl2\" (UniqueName: \"kubernetes.io/projected/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-kube-api-access-zhgl2\") pod \"console-f9d7485db-frztl\" (UID: \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\") " pod="openshift-console/console-f9d7485db-frztl" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.623968 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq28t\" (UniqueName: \"kubernetes.io/projected/324f040b-716b-41ff-80af-acd92d47a95d-kube-api-access-zq28t\") pod \"cluster-image-registry-operator-dc59b4c8b-5l855\" (UID: \"324f040b-716b-41ff-80af-acd92d47a95d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5l855" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.623986 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.624005 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqjmr\" (UniqueName: \"kubernetes.io/projected/b947565b-6a14-4bbd-881e-e82c33ca3a3b-kube-api-access-hqjmr\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.624030 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8ec6d15-494f-427c-b532-adebe8e9d910-config\") pod \"etcd-operator-b45778765-hvwx7\" (UID: \"a8ec6d15-494f-427c-b532-adebe8e9d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hvwx7" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.624069 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-console-oauth-config\") pod \"console-f9d7485db-frztl\" (UID: \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\") " pod="openshift-console/console-f9d7485db-frztl" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.624092 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/324f040b-716b-41ff-80af-acd92d47a95d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5l855\" (UID: \"324f040b-716b-41ff-80af-acd92d47a95d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5l855" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.624110 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sbcn\" (UniqueName: \"kubernetes.io/projected/793b5b1f-d882-4f05-be9f-7515433a91e7-kube-api-access-5sbcn\") pod \"ingress-operator-5b745b69d9-z5kbx\" (UID: \"793b5b1f-d882-4f05-be9f-7515433a91e7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z5kbx" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.624126 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8672426-860f-4c9e-a776-094b8df786a2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ghfg9\" (UID: \"b8672426-860f-4c9e-a776-094b8df786a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ghfg9" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.624141 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7c5463e2-9818-4a5e-8dd0-36cd4c78d749-audit-policies\") pod \"apiserver-7bbb656c7d-jwmkt\" (UID: \"7c5463e2-9818-4a5e-8dd0-36cd4c78d749\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.624156 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb068b0a-4b6b-48b7-bae4-ab193394f299-client-ca\") pod \"route-controller-manager-6576b87f9c-h57m9\" (UID: \"eb068b0a-4b6b-48b7-bae4-ab193394f299\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.624175 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d35f633-a6e9-4890-8c3f-ec87291ac03f-etcd-client\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.624191 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hbrl\" (UniqueName: \"kubernetes.io/projected/1d35f633-a6e9-4890-8c3f-ec87291ac03f-kube-api-access-7hbrl\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.624209 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b947565b-6a14-4bbd-881e-e82c33ca3a3b-audit-dir\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.624228 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79844037-42b5-456b-acbd-45fc61f251d9-trusted-ca\") pod \"console-operator-58897d9998-bj8hg\" (UID: \"79844037-42b5-456b-acbd-45fc61f251d9\") " pod="openshift-console-operator/console-operator-58897d9998-bj8hg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.624245 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79844037-42b5-456b-acbd-45fc61f251d9-config\") pod \"console-operator-58897d9998-bj8hg\" (UID: \"79844037-42b5-456b-acbd-45fc61f251d9\") " pod="openshift-console-operator/console-operator-58897d9998-bj8hg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.624263 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bffd7e9c-5274-4e27-b5d9-7e23ae3cbfbc-available-featuregates\") pod \"openshift-config-operator-7777fb866f-bxbsl\" (UID: 
\"bffd7e9c-5274-4e27-b5d9-7e23ae3cbfbc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bxbsl" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.624293 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d5118e4-db44-4e09-a04d-2036e251936b-config\") pod \"controller-manager-879f6c89f-4rg2h\" (UID: \"4d5118e4-db44-4e09-a04d-2036e251936b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4rg2h" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.624318 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3790628-7588-42bf-ace6-04e2a0f1a09a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-spj4x\" (UID: \"e3790628-7588-42bf-ace6-04e2a0f1a09a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-spj4x" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.624373 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3510e180-be29-469c-bfa0-b06702f80c93-images\") pod \"machine-api-operator-5694c8668f-fgmg6\" (UID: \"3510e180-be29-469c-bfa0-b06702f80c93\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fgmg6" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.624395 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/324f040b-716b-41ff-80af-acd92d47a95d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5l855\" (UID: \"324f040b-716b-41ff-80af-acd92d47a95d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5l855" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.624416 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3790628-7588-42bf-ace6-04e2a0f1a09a-config\") pod \"kube-apiserver-operator-766d6c64bb-spj4x\" (UID: \"e3790628-7588-42bf-ace6-04e2a0f1a09a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-spj4x" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.625182 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8ec6d15-494f-427c-b532-adebe8e9d910-config\") pod \"etcd-operator-b45778765-hvwx7\" (UID: \"a8ec6d15-494f-427c-b532-adebe8e9d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hvwx7" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.625809 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb068b0a-4b6b-48b7-bae4-ab193394f299-client-ca\") pod \"route-controller-manager-6576b87f9c-h57m9\" (UID: \"eb068b0a-4b6b-48b7-bae4-ab193394f299\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.625824 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79844037-42b5-456b-acbd-45fc61f251d9-trusted-ca\") pod \"console-operator-58897d9998-bj8hg\" (UID: \"79844037-42b5-456b-acbd-45fc61f251d9\") " pod="openshift-console-operator/console-operator-58897d9998-bj8hg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.625878 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7c5463e2-9818-4a5e-8dd0-36cd4c78d749-audit-policies\") pod \"apiserver-7bbb656c7d-jwmkt\" (UID: \"7c5463e2-9818-4a5e-8dd0-36cd4c78d749\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.626116 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b947565b-6a14-4bbd-881e-e82c33ca3a3b-audit-dir\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.626235 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/793b5b1f-d882-4f05-be9f-7515433a91e7-metrics-tls\") pod \"ingress-operator-5b745b69d9-z5kbx\" (UID: \"793b5b1f-d882-4f05-be9f-7515433a91e7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z5kbx" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.626274 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d35f633-a6e9-4890-8c3f-ec87291ac03f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.626292 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1d35f633-a6e9-4890-8c3f-ec87291ac03f-audit\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.626315 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.626352 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c5463e2-9818-4a5e-8dd0-36cd4c78d749-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jwmkt\" (UID: \"7c5463e2-9818-4a5e-8dd0-36cd4c78d749\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.626369 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/793b5b1f-d882-4f05-be9f-7515433a91e7-trusted-ca\") pod \"ingress-operator-5b745b69d9-z5kbx\" (UID: \"793b5b1f-d882-4f05-be9f-7515433a91e7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z5kbx" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.626387 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d42a606f-2b2f-4782-ba98-15d8662eb3a9-metrics-tls\") pod \"dns-operator-744455d44c-x62jn\" (UID: \"d42a606f-2b2f-4782-ba98-15d8662eb3a9\") " pod="openshift-dns-operator/dns-operator-744455d44c-x62jn" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.626423 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.626456 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/204067d9-20d8-440f-88f4-57b6ce3a0ef1-serving-cert\") pod \"authentication-operator-69f744f599-zjtrn\" (UID: \"204067d9-20d8-440f-88f4-57b6ce3a0ef1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zjtrn" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.626490 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kbwd\" (UniqueName: \"kubernetes.io/projected/204067d9-20d8-440f-88f4-57b6ce3a0ef1-kube-api-access-9kbwd\") pod \"authentication-operator-69f744f599-zjtrn\" (UID: \"204067d9-20d8-440f-88f4-57b6ce3a0ef1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zjtrn" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.626512 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7c5463e2-9818-4a5e-8dd0-36cd4c78d749-encryption-config\") pod \"apiserver-7bbb656c7d-jwmkt\" (UID: \"7c5463e2-9818-4a5e-8dd0-36cd4c78d749\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.626563 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79844037-42b5-456b-acbd-45fc61f251d9-config\") pod \"console-operator-58897d9998-bj8hg\" (UID: \"79844037-42b5-456b-acbd-45fc61f251d9\") " pod="openshift-console-operator/console-operator-58897d9998-bj8hg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.626580 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb068b0a-4b6b-48b7-bae4-ab193394f299-config\") pod \"route-controller-manager-6576b87f9c-h57m9\" (UID: \"eb068b0a-4b6b-48b7-bae4-ab193394f299\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.626598 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.626605 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bffd7e9c-5274-4e27-b5d9-7e23ae3cbfbc-available-featuregates\") pod \"openshift-config-operator-7777fb866f-bxbsl\" (UID: \"bffd7e9c-5274-4e27-b5d9-7e23ae3cbfbc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bxbsl" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.626617 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/204067d9-20d8-440f-88f4-57b6ce3a0ef1-service-ca-bundle\") pod \"authentication-operator-69f744f599-zjtrn\" (UID: \"204067d9-20d8-440f-88f4-57b6ce3a0ef1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zjtrn" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.626717 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1d35f633-a6e9-4890-8c3f-ec87291ac03f-encryption-config\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.626766 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a8ec6d15-494f-427c-b532-adebe8e9d910-etcd-client\") pod \"etcd-operator-b45778765-hvwx7\" (UID: \"a8ec6d15-494f-427c-b532-adebe8e9d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hvwx7" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.626802 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.626842 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.626880 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-console-config\") pod \"console-f9d7485db-frztl\" (UID: \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\") " pod="openshift-console/console-f9d7485db-frztl" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.626916 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/324f040b-716b-41ff-80af-acd92d47a95d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5l855\" (UID: \"324f040b-716b-41ff-80af-acd92d47a95d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5l855" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.626955 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.626994 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9f6m\" (UniqueName: \"kubernetes.io/projected/7c5463e2-9818-4a5e-8dd0-36cd4c78d749-kube-api-access-w9f6m\") pod \"apiserver-7bbb656c7d-jwmkt\" 
(UID: \"7c5463e2-9818-4a5e-8dd0-36cd4c78d749\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627032 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3790628-7588-42bf-ace6-04e2a0f1a09a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-spj4x\" (UID: \"e3790628-7588-42bf-ace6-04e2a0f1a09a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-spj4x" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627065 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-oauth-serving-cert\") pod \"console-f9d7485db-frztl\" (UID: \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\") " pod="openshift-console/console-f9d7485db-frztl" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627104 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsclv\" (UniqueName: \"kubernetes.io/projected/d677ab93-2fac-4612-8558-8ffc559d5247-kube-api-access-jsclv\") pod \"downloads-7954f5f757-wczvq\" (UID: \"d677ab93-2fac-4612-8558-8ffc559d5247\") " pod="openshift-console/downloads-7954f5f757-wczvq" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627148 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/204067d9-20d8-440f-88f4-57b6ce3a0ef1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-zjtrn\" (UID: \"204067d9-20d8-440f-88f4-57b6ce3a0ef1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zjtrn" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627187 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e017d9d-e6ec-4917-b888-987be0ce0523-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8wjnz\" (UID: \"9e017d9d-e6ec-4917-b888-987be0ce0523\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wjnz" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627220 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/204067d9-20d8-440f-88f4-57b6ce3a0ef1-service-ca-bundle\") pod \"authentication-operator-69f744f599-zjtrn\" (UID: \"204067d9-20d8-440f-88f4-57b6ce3a0ef1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zjtrn" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627223 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v9l5\" (UniqueName: \"kubernetes.io/projected/a8ec6d15-494f-427c-b532-adebe8e9d910-kube-api-access-5v9l5\") pod \"etcd-operator-b45778765-hvwx7\" (UID: \"a8ec6d15-494f-427c-b532-adebe8e9d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hvwx7" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627261 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb068b0a-4b6b-48b7-bae4-ab193394f299-serving-cert\") pod \"route-controller-manager-6576b87f9c-h57m9\" (UID: \"eb068b0a-4b6b-48b7-bae4-ab193394f299\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 
16:24:32.627282 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-trusted-ca-bundle\") pod \"console-f9d7485db-frztl\" (UID: \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\") " pod="openshift-console/console-f9d7485db-frztl" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627300 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e017d9d-e6ec-4917-b888-987be0ce0523-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8wjnz\" (UID: \"9e017d9d-e6ec-4917-b888-987be0ce0523\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wjnz" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627317 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-service-ca\") pod \"console-f9d7485db-frztl\" (UID: \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\") " pod="openshift-console/console-f9d7485db-frztl" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627354 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d35f633-a6e9-4890-8c3f-ec87291ac03f-serving-cert\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627372 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1d35f633-a6e9-4890-8c3f-ec87291ac03f-node-pullsecrets\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627388 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d35f633-a6e9-4890-8c3f-ec87291ac03f-config\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627408 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7c5463e2-9818-4a5e-8dd0-36cd4c78d749-etcd-client\") pod \"apiserver-7bbb656c7d-jwmkt\" (UID: \"7c5463e2-9818-4a5e-8dd0-36cd4c78d749\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627424 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d5118e4-db44-4e09-a04d-2036e251936b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4rg2h\" (UID: \"4d5118e4-db44-4e09-a04d-2036e251936b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4rg2h" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627443 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6fxw\" (UniqueName: \"kubernetes.io/projected/9e017d9d-e6ec-4917-b888-987be0ce0523-kube-api-access-f6fxw\") pod \"openshift-apiserver-operator-796bbdcf4f-8wjnz\" (UID: \"9e017d9d-e6ec-4917-b888-987be0ce0523\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wjnz" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627462 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3510e180-be29-469c-bfa0-b06702f80c93-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-fgmg6\" (UID: \"3510e180-be29-469c-bfa0-b06702f80c93\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fgmg6" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627481 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a8ec6d15-494f-427c-b532-adebe8e9d910-etcd-service-ca\") pod \"etcd-operator-b45778765-hvwx7\" (UID: \"a8ec6d15-494f-427c-b532-adebe8e9d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hvwx7" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627497 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1d35f633-a6e9-4890-8c3f-ec87291ac03f-etcd-serving-ca\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627513 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b947565b-6a14-4bbd-881e-e82c33ca3a3b-audit-policies\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627528 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c5463e2-9818-4a5e-8dd0-36cd4c78d749-serving-cert\") pod \"apiserver-7bbb656c7d-jwmkt\" (UID: \"7c5463e2-9818-4a5e-8dd0-36cd4c78d749\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627545 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3510e180-be29-469c-bfa0-b06702f80c93-config\") pod \"machine-api-operator-5694c8668f-fgmg6\" (UID: \"3510e180-be29-469c-bfa0-b06702f80c93\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fgmg6" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627576 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8ec6d15-494f-427c-b532-adebe8e9d910-serving-cert\") pod \"etcd-operator-b45778765-hvwx7\" (UID: \"a8ec6d15-494f-427c-b532-adebe8e9d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hvwx7" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627582 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d5118e4-db44-4e09-a04d-2036e251936b-config\") pod \"controller-manager-879f6c89f-4rg2h\" (UID: \"4d5118e4-db44-4e09-a04d-2036e251936b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4rg2h" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627603 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627632 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h8pr\" (UniqueName: \"kubernetes.io/projected/eb068b0a-4b6b-48b7-bae4-ab193394f299-kube-api-access-6h8pr\") pod \"route-controller-manager-6576b87f9c-h57m9\" (UID: \"eb068b0a-4b6b-48b7-bae4-ab193394f299\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627652 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/204067d9-20d8-440f-88f4-57b6ce3a0ef1-config\") pod \"authentication-operator-69f744f599-zjtrn\" (UID: \"204067d9-20d8-440f-88f4-57b6ce3a0ef1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zjtrn" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627667 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/793b5b1f-d882-4f05-be9f-7515433a91e7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-z5kbx\" (UID: \"793b5b1f-d882-4f05-be9f-7515433a91e7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z5kbx" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627684 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79844037-42b5-456b-acbd-45fc61f251d9-serving-cert\") pod \"console-operator-58897d9998-bj8hg\" (UID: \"79844037-42b5-456b-acbd-45fc61f251d9\") " pod="openshift-console-operator/console-operator-58897d9998-bj8hg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627699 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a8ec6d15-494f-427c-b532-adebe8e9d910-etcd-ca\") pod \"etcd-operator-b45778765-hvwx7\" (UID: \"a8ec6d15-494f-427c-b532-adebe8e9d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hvwx7" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627721 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bffd7e9c-5274-4e27-b5d9-7e23ae3cbfbc-serving-cert\") pod \"openshift-config-operator-7777fb866f-bxbsl\" (UID: \"bffd7e9c-5274-4e27-b5d9-7e23ae3cbfbc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bxbsl" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627745 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7c5463e2-9818-4a5e-8dd0-36cd4c78d749-audit-dir\") pod \"apiserver-7bbb656c7d-jwmkt\" (UID: \"7c5463e2-9818-4a5e-8dd0-36cd4c78d749\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627767 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-console-serving-cert\") pod \"console-f9d7485db-frztl\" (UID: \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\") " pod="openshift-console/console-f9d7485db-frztl" 
Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627787 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1d35f633-a6e9-4890-8c3f-ec87291ac03f-image-import-ca\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627805 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627826 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5f2j\" (UniqueName: \"kubernetes.io/projected/79844037-42b5-456b-acbd-45fc61f251d9-kube-api-access-q5f2j\") pod \"console-operator-58897d9998-bj8hg\" (UID: \"79844037-42b5-456b-acbd-45fc61f251d9\") " pod="openshift-console-operator/console-operator-58897d9998-bj8hg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627849 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p42km\" (UniqueName: \"kubernetes.io/projected/ded0e679-6bf1-4d45-a59f-2c1b89bed863-kube-api-access-p42km\") pod \"cluster-samples-operator-665b6dd947-pgq49\" (UID: \"ded0e679-6bf1-4d45-a59f-2c1b89bed863\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pgq49" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627870 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627889 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ded0e679-6bf1-4d45-a59f-2c1b89bed863-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pgq49\" (UID: \"ded0e679-6bf1-4d45-a59f-2c1b89bed863\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pgq49" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627907 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d5118e4-db44-4e09-a04d-2036e251936b-client-ca\") pod \"controller-manager-879f6c89f-4rg2h\" (UID: \"4d5118e4-db44-4e09-a04d-2036e251936b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4rg2h" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627925 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627942 
4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7c5463e2-9818-4a5e-8dd0-36cd4c78d749-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jwmkt\" (UID: \"7c5463e2-9818-4a5e-8dd0-36cd4c78d749\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627960 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d35f633-a6e9-4890-8c3f-ec87291ac03f-audit-dir\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627978 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfffh\" (UniqueName: \"kubernetes.io/projected/bffd7e9c-5274-4e27-b5d9-7e23ae3cbfbc-kube-api-access-rfffh\") pod \"openshift-config-operator-7777fb866f-bxbsl\" (UID: \"bffd7e9c-5274-4e27-b5d9-7e23ae3cbfbc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bxbsl" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627997 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54bpz\" (UniqueName: \"kubernetes.io/projected/d42a606f-2b2f-4782-ba98-15d8662eb3a9-kube-api-access-54bpz\") pod \"dns-operator-744455d44c-x62jn\" (UID: \"d42a606f-2b2f-4782-ba98-15d8662eb3a9\") " pod="openshift-dns-operator/dns-operator-744455d44c-x62jn" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.628014 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44jkf\" (UniqueName: \"kubernetes.io/projected/4d5118e4-db44-4e09-a04d-2036e251936b-kube-api-access-44jkf\") pod \"controller-manager-879f6c89f-4rg2h\" (UID: \"4d5118e4-db44-4e09-a04d-2036e251936b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4rg2h" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.628044 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3510e180-be29-469c-bfa0-b06702f80c93-images\") pod \"machine-api-operator-5694c8668f-fgmg6\" (UID: \"3510e180-be29-469c-bfa0-b06702f80c93\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fgmg6" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.628439 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-service-ca\") pod \"console-f9d7485db-frztl\" (UID: \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\") " pod="openshift-console/console-f9d7485db-frztl" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.628883 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c5463e2-9818-4a5e-8dd0-36cd4c78d749-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jwmkt\" (UID: \"7c5463e2-9818-4a5e-8dd0-36cd4c78d749\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.629258 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d35f633-a6e9-4890-8c3f-ec87291ac03f-etcd-client\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " 
pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.629413 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d5118e4-db44-4e09-a04d-2036e251936b-serving-cert\") pod \"controller-manager-879f6c89f-4rg2h\" (UID: \"4d5118e4-db44-4e09-a04d-2036e251936b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4rg2h" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.629468 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/324f040b-716b-41ff-80af-acd92d47a95d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5l855\" (UID: \"324f040b-716b-41ff-80af-acd92d47a95d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5l855" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.629539 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.630533 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1d35f633-a6e9-4890-8c3f-ec87291ac03f-audit\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.630647 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d35f633-a6e9-4890-8c3f-ec87291ac03f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.630700 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.630815 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-console-oauth-config\") pod \"console-f9d7485db-frztl\" (UID: \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\") " pod="openshift-console/console-f9d7485db-frztl" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.631412 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/793b5b1f-d882-4f05-be9f-7515433a91e7-trusted-ca\") pod \"ingress-operator-5b745b69d9-z5kbx\" (UID: \"793b5b1f-d882-4f05-be9f-7515433a91e7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z5kbx" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.631957 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8672426-860f-4c9e-a776-094b8df786a2-serving-cert\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-ghfg9\" (UID: \"b8672426-860f-4c9e-a776-094b8df786a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ghfg9" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.632003 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a8ec6d15-494f-427c-b532-adebe8e9d910-etcd-ca\") pod \"etcd-operator-b45778765-hvwx7\" (UID: \"a8ec6d15-494f-427c-b532-adebe8e9d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hvwx7" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.632369 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7c5463e2-9818-4a5e-8dd0-36cd4c78d749-encryption-config\") pod \"apiserver-7bbb656c7d-jwmkt\" (UID: \"7c5463e2-9818-4a5e-8dd0-36cd4c78d749\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.632732 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb068b0a-4b6b-48b7-bae4-ab193394f299-config\") pod \"route-controller-manager-6576b87f9c-h57m9\" (UID: \"eb068b0a-4b6b-48b7-bae4-ab193394f299\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.633118 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d35f633-a6e9-4890-8c3f-ec87291ac03f-serving-cert\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.633312 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.633576 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-console-config\") pod \"console-f9d7485db-frztl\" (UID: \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\") " pod="openshift-console/console-f9d7485db-frztl" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.633705 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79844037-42b5-456b-acbd-45fc61f251d9-serving-cert\") pod \"console-operator-58897d9998-bj8hg\" (UID: \"79844037-42b5-456b-acbd-45fc61f251d9\") " pod="openshift-console-operator/console-operator-58897d9998-bj8hg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.633766 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1d35f633-a6e9-4890-8c3f-ec87291ac03f-node-pullsecrets\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.634085 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 
29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.634232 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d35f633-a6e9-4890-8c3f-ec87291ac03f-config\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.634830 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.635125 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.635759 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bffd7e9c-5274-4e27-b5d9-7e23ae3cbfbc-serving-cert\") pod \"openshift-config-operator-7777fb866f-bxbsl\" (UID: \"bffd7e9c-5274-4e27-b5d9-7e23ae3cbfbc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bxbsl" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.635810 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7c5463e2-9818-4a5e-8dd0-36cd4c78d749-audit-dir\") pod \"apiserver-7bbb656c7d-jwmkt\" (UID: \"7c5463e2-9818-4a5e-8dd0-36cd4c78d749\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.636582 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.636749 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/793b5b1f-d882-4f05-be9f-7515433a91e7-metrics-tls\") pod \"ingress-operator-5b745b69d9-z5kbx\" (UID: \"793b5b1f-d882-4f05-be9f-7515433a91e7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z5kbx" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.627280 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8672426-860f-4c9e-a776-094b8df786a2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ghfg9\" (UID: \"b8672426-860f-4c9e-a776-094b8df786a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ghfg9" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.637163 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/7c5463e2-9818-4a5e-8dd0-36cd4c78d749-etcd-client\") pod \"apiserver-7bbb656c7d-jwmkt\" (UID: \"7c5463e2-9818-4a5e-8dd0-36cd4c78d749\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.637564 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/324f040b-716b-41ff-80af-acd92d47a95d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5l855\" (UID: \"324f040b-716b-41ff-80af-acd92d47a95d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5l855" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.638719 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-console-serving-cert\") pod \"console-f9d7485db-frztl\" (UID: \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\") " pod="openshift-console/console-f9d7485db-frztl" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.639136 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/204067d9-20d8-440f-88f4-57b6ce3a0ef1-serving-cert\") pod \"authentication-operator-69f744f599-zjtrn\" (UID: \"204067d9-20d8-440f-88f4-57b6ce3a0ef1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zjtrn" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.639207 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d42a606f-2b2f-4782-ba98-15d8662eb3a9-metrics-tls\") pod \"dns-operator-744455d44c-x62jn\" (UID: \"d42a606f-2b2f-4782-ba98-15d8662eb3a9\") " pod="openshift-dns-operator/dns-operator-744455d44c-x62jn" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.639897 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3510e180-be29-469c-bfa0-b06702f80c93-config\") pod \"machine-api-operator-5694c8668f-fgmg6\" (UID: \"3510e180-be29-469c-bfa0-b06702f80c93\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fgmg6" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.640085 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a8ec6d15-494f-427c-b532-adebe8e9d910-etcd-client\") pod \"etcd-operator-b45778765-hvwx7\" (UID: \"a8ec6d15-494f-427c-b532-adebe8e9d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hvwx7" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.640203 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d5118e4-db44-4e09-a04d-2036e251936b-client-ca\") pod \"controller-manager-879f6c89f-4rg2h\" (UID: \"4d5118e4-db44-4e09-a04d-2036e251936b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4rg2h" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.640280 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b947565b-6a14-4bbd-881e-e82c33ca3a3b-audit-policies\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.640637 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.640722 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a8ec6d15-494f-427c-b532-adebe8e9d910-etcd-service-ca\") pod \"etcd-operator-b45778765-hvwx7\" (UID: \"a8ec6d15-494f-427c-b532-adebe8e9d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hvwx7" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.641105 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/204067d9-20d8-440f-88f4-57b6ce3a0ef1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-zjtrn\" (UID: \"204067d9-20d8-440f-88f4-57b6ce3a0ef1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zjtrn" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.641296 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e017d9d-e6ec-4917-b888-987be0ce0523-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8wjnz\" (UID: \"9e017d9d-e6ec-4917-b888-987be0ce0523\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wjnz" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.641514 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d35f633-a6e9-4890-8c3f-ec87291ac03f-audit-dir\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.641617 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d5118e4-db44-4e09-a04d-2036e251936b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4rg2h\" (UID: \"4d5118e4-db44-4e09-a04d-2036e251936b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4rg2h" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.641635 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-oauth-serving-cert\") pod \"console-f9d7485db-frztl\" (UID: \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\") " pod="openshift-console/console-f9d7485db-frztl" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.641923 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1d35f633-a6e9-4890-8c3f-ec87291ac03f-image-import-ca\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.641939 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1d35f633-a6e9-4890-8c3f-ec87291ac03f-encryption-config\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " 
pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.640822 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1d35f633-a6e9-4890-8c3f-ec87291ac03f-etcd-serving-ca\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.642145 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/204067d9-20d8-440f-88f4-57b6ce3a0ef1-config\") pod \"authentication-operator-69f744f599-zjtrn\" (UID: \"204067d9-20d8-440f-88f4-57b6ce3a0ef1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zjtrn" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.642306 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.642489 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c5463e2-9818-4a5e-8dd0-36cd4c78d749-serving-cert\") pod \"apiserver-7bbb656c7d-jwmkt\" (UID: \"7c5463e2-9818-4a5e-8dd0-36cd4c78d749\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.642638 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7c5463e2-9818-4a5e-8dd0-36cd4c78d749-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jwmkt\" (UID: \"7c5463e2-9818-4a5e-8dd0-36cd4c78d749\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.643173 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8ec6d15-494f-427c-b532-adebe8e9d910-serving-cert\") pod \"etcd-operator-b45778765-hvwx7\" (UID: \"a8ec6d15-494f-427c-b532-adebe8e9d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hvwx7" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.643306 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-trusted-ca-bundle\") pod \"console-f9d7485db-frztl\" (UID: \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\") " pod="openshift-console/console-f9d7485db-frztl" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.644507 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.644559 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.645193 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ded0e679-6bf1-4d45-a59f-2c1b89bed863-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pgq49\" (UID: \"ded0e679-6bf1-4d45-a59f-2c1b89bed863\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pgq49" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.645197 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3510e180-be29-469c-bfa0-b06702f80c93-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-fgmg6\" (UID: \"3510e180-be29-469c-bfa0-b06702f80c93\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fgmg6" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.645685 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.650693 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.670233 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb068b0a-4b6b-48b7-bae4-ab193394f299-serving-cert\") pod \"route-controller-manager-6576b87f9c-h57m9\" (UID: \"eb068b0a-4b6b-48b7-bae4-ab193394f299\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.670611 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.676487 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e017d9d-e6ec-4917-b888-987be0ce0523-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8wjnz\" (UID: \"9e017d9d-e6ec-4917-b888-987be0ce0523\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wjnz" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.689571 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.702419 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3790628-7588-42bf-ace6-04e2a0f1a09a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-spj4x\" (UID: \"e3790628-7588-42bf-ace6-04e2a0f1a09a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-spj4x" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.710609 4886 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.729451 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.739699 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3790628-7588-42bf-ace6-04e2a0f1a09a-config\") pod \"kube-apiserver-operator-766d6c64bb-spj4x\" (UID: \"e3790628-7588-42bf-ace6-04e2a0f1a09a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-spj4x" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.754225 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.771765 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.790058 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.849541 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.870572 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.890719 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.909392 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.930234 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.950354 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.970365 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 29 16:24:32 crc kubenswrapper[4886]: I0129 16:24:32.990410 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.009663 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.030565 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.050158 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.069063 4886 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.090750 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.109917 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.130455 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.150471 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.169689 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.190550 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.209673 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.229837 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.250648 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.271611 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.290427 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.309819 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.329665 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.349767 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.369020 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.389842 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.409743 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.431056 4886 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.449663 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.470136 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.490088 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.507519 4886 request.go:700] Waited for 1.009263684s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.509794 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.530434 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.550531 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.569781 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.590136 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.611021 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.630415 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.649024 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.669458 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.689315 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.709734 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.729860 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.750430 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 29 16:24:33 crc kubenswrapper[4886]: 
Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.789581 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.809426 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.829141 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.850162 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.870127 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.891297 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.910271 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.932024 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.950708 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.970799 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 16:24:33 crc kubenswrapper[4886]: I0129 16:24:33.990674 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.010488 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.039820 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.051725 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.071029 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.091109 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.110922 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.130054 4886 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.149696 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.170369 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.189842 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.210764 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.230525 4886 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.249973 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.269916 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.290258 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.309866 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.330256 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.350025 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.369773 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.413153 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ht79\" (UniqueName: \"kubernetes.io/projected/3510e180-be29-469c-bfa0-b06702f80c93-kube-api-access-2ht79\") pod \"machine-api-operator-5694c8668f-fgmg6\" (UID: \"3510e180-be29-469c-bfa0-b06702f80c93\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fgmg6" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.431078 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/324f040b-716b-41ff-80af-acd92d47a95d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5l855\" (UID: \"324f040b-716b-41ff-80af-acd92d47a95d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5l855" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.449508 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhgl2\" (UniqueName: \"kubernetes.io/projected/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-kube-api-access-zhgl2\") pod \"console-f9d7485db-frztl\" (UID: \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\") " pod="openshift-console/console-f9d7485db-frztl" Jan 29 16:24:34 
crc kubenswrapper[4886]: I0129 16:24:34.469854 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqjmr\" (UniqueName: \"kubernetes.io/projected/b947565b-6a14-4bbd-881e-e82c33ca3a3b-kube-api-access-hqjmr\") pod \"oauth-openshift-558db77b4-mpttg\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.500509 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8672426-860f-4c9e-a776-094b8df786a2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ghfg9\" (UID: \"b8672426-860f-4c9e-a776-094b8df786a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ghfg9" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.507120 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sbcn\" (UniqueName: \"kubernetes.io/projected/793b5b1f-d882-4f05-be9f-7515433a91e7-kube-api-access-5sbcn\") pod \"ingress-operator-5b745b69d9-z5kbx\" (UID: \"793b5b1f-d882-4f05-be9f-7515433a91e7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z5kbx" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.508623 4886 request.go:700] Waited for 1.882462012s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa/token Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.525100 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hbrl\" (UniqueName: \"kubernetes.io/projected/1d35f633-a6e9-4890-8c3f-ec87291ac03f-kube-api-access-7hbrl\") pod \"apiserver-76f77b778f-v5s4w\" (UID: \"1d35f633-a6e9-4890-8c3f-ec87291ac03f\") " pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.549231 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq28t\" (UniqueName: \"kubernetes.io/projected/324f040b-716b-41ff-80af-acd92d47a95d-kube-api-access-zq28t\") pod \"cluster-image-registry-operator-dc59b4c8b-5l855\" (UID: \"324f040b-716b-41ff-80af-acd92d47a95d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5l855" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.566568 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.569952 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v9l5\" (UniqueName: \"kubernetes.io/projected/a8ec6d15-494f-427c-b532-adebe8e9d910-kube-api-access-5v9l5\") pod \"etcd-operator-b45778765-hvwx7\" (UID: \"a8ec6d15-494f-427c-b532-adebe8e9d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hvwx7" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.585211 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44jkf\" (UniqueName: \"kubernetes.io/projected/4d5118e4-db44-4e09-a04d-2036e251936b-kube-api-access-44jkf\") pod \"controller-manager-879f6c89f-4rg2h\" (UID: \"4d5118e4-db44-4e09-a04d-2036e251936b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4rg2h" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.605367 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.619674 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p42km\" (UniqueName: \"kubernetes.io/projected/ded0e679-6bf1-4d45-a59f-2c1b89bed863-kube-api-access-p42km\") pod \"cluster-samples-operator-665b6dd947-pgq49\" (UID: \"ded0e679-6bf1-4d45-a59f-2c1b89bed863\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pgq49" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.620075 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5l855" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.638977 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hvwx7" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.643582 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsclv\" (UniqueName: \"kubernetes.io/projected/d677ab93-2fac-4612-8558-8ffc559d5247-kube-api-access-jsclv\") pod \"downloads-7954f5f757-wczvq\" (UID: \"d677ab93-2fac-4612-8558-8ffc559d5247\") " pod="openshift-console/downloads-7954f5f757-wczvq" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.644993 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5f2j\" (UniqueName: \"kubernetes.io/projected/79844037-42b5-456b-acbd-45fc61f251d9-kube-api-access-q5f2j\") pod \"console-operator-58897d9998-bj8hg\" (UID: \"79844037-42b5-456b-acbd-45fc61f251d9\") " pod="openshift-console-operator/console-operator-58897d9998-bj8hg" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.646478 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ghfg9" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.672886 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6fxw\" (UniqueName: \"kubernetes.io/projected/9e017d9d-e6ec-4917-b888-987be0ce0523-kube-api-access-f6fxw\") pod \"openshift-apiserver-operator-796bbdcf4f-8wjnz\" (UID: \"9e017d9d-e6ec-4917-b888-987be0ce0523\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wjnz" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.693962 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h8pr\" (UniqueName: \"kubernetes.io/projected/eb068b0a-4b6b-48b7-bae4-ab193394f299-kube-api-access-6h8pr\") pod \"route-controller-manager-6576b87f9c-h57m9\" (UID: \"eb068b0a-4b6b-48b7-bae4-ab193394f299\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.701766 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-fgmg6" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.708734 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfffh\" (UniqueName: \"kubernetes.io/projected/bffd7e9c-5274-4e27-b5d9-7e23ae3cbfbc-kube-api-access-rfffh\") pod \"openshift-config-operator-7777fb866f-bxbsl\" (UID: \"bffd7e9c-5274-4e27-b5d9-7e23ae3cbfbc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bxbsl" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.730694 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54bpz\" (UniqueName: \"kubernetes.io/projected/d42a606f-2b2f-4782-ba98-15d8662eb3a9-kube-api-access-54bpz\") pod \"dns-operator-744455d44c-x62jn\" (UID: \"d42a606f-2b2f-4782-ba98-15d8662eb3a9\") " pod="openshift-dns-operator/dns-operator-744455d44c-x62jn" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.746230 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-frztl" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.751782 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9f6m\" (UniqueName: \"kubernetes.io/projected/7c5463e2-9818-4a5e-8dd0-36cd4c78d749-kube-api-access-w9f6m\") pod \"apiserver-7bbb656c7d-jwmkt\" (UID: \"7c5463e2-9818-4a5e-8dd0-36cd4c78d749\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.755146 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-v5s4w"] Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.755458 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.766548 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4rg2h" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.767835 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3790628-7588-42bf-ace6-04e2a0f1a09a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-spj4x\" (UID: \"e3790628-7588-42bf-ace6-04e2a0f1a09a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-spj4x" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.785015 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kbwd\" (UniqueName: \"kubernetes.io/projected/204067d9-20d8-440f-88f4-57b6ce3a0ef1-kube-api-access-9kbwd\") pod \"authentication-operator-69f744f599-zjtrn\" (UID: \"204067d9-20d8-440f-88f4-57b6ce3a0ef1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zjtrn" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.788869 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.812961 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/793b5b1f-d882-4f05-be9f-7515433a91e7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-z5kbx\" (UID: \"793b5b1f-d882-4f05-be9f-7515433a91e7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z5kbx" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.822723 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wjnz" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.858502 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mpttg"] Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.865508 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b1d6caa5-f77a-4acf-a631-0c3abb84959c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.865552 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b1d6caa5-f77a-4acf-a631-0c3abb84959c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.865581 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b1d6caa5-f77a-4acf-a631-0c3abb84959c-registry-certificates\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.865604 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/b1d6caa5-f77a-4acf-a631-0c3abb84959c-bound-sa-token\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.865714 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b1d6caa5-f77a-4acf-a631-0c3abb84959c-registry-tls\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.865810 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.865853 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1d6caa5-f77a-4acf-a631-0c3abb84959c-trusted-ca\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.865876 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vgbh\" (UniqueName: \"kubernetes.io/projected/b1d6caa5-f77a-4acf-a631-0c3abb84959c-kube-api-access-8vgbh\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:34 crc kubenswrapper[4886]: E0129 16:24:34.866213 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:35.366200629 +0000 UTC m=+158.274919901 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.893072 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-bj8hg" Jan 29 16:24:34 crc kubenswrapper[4886]: W0129 16:24:34.902012 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb947565b_6a14_4bbd_881e_e82c33ca3a3b.slice/crio-cb33ac24972d3d5dba165317920577129d54d60d3420d9aec798c5982a6dac0a WatchSource:0}: Error finding container cb33ac24972d3d5dba165317920577129d54d60d3420d9aec798c5982a6dac0a: Status 404 returned error can't find the container with id cb33ac24972d3d5dba165317920577129d54d60d3420d9aec798c5982a6dac0a Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.912104 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pgq49" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.931651 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-wczvq" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.966166 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bxbsl" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.966501 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.966703 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqrzz\" (UniqueName: \"kubernetes.io/projected/99f63064-683c-4132-83b3-53480c64f426-kube-api-access-sqrzz\") pod \"machine-approver-56656f9798-m2x88\" (UID: \"99f63064-683c-4132-83b3-53480c64f426\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m2x88" Jan 29 16:24:34 crc kubenswrapper[4886]: E0129 16:24:34.966773 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:35.466721844 +0000 UTC m=+158.375441116 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.966877 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/14647a71-8c69-4ae7-919a-fe0ef1684c1f-tmpfs\") pod \"packageserver-d55dfcdfc-ssftv\" (UID: \"14647a71-8c69-4ae7-919a-fe0ef1684c1f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssftv" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.966990 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.967023 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bc749cf3-40b6-4957-ac19-a5d6db460e00-proxy-tls\") pod \"machine-config-controller-84d6567774-plhr2\" (UID: \"bc749cf3-40b6-4957-ac19-a5d6db460e00\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-plhr2" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.967051 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gltxf\" (UniqueName: \"kubernetes.io/projected/5ad09ea7-63c0-4583-acb7-da4ce7f694f4-kube-api-access-gltxf\") pod \"service-ca-operator-777779d784-z4r4v\" (UID: \"5ad09ea7-63c0-4583-acb7-da4ce7f694f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z4r4v" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.967074 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f015af7b-346b-42a5-bea4-6f58b6ab41a7-proxy-tls\") pod \"machine-config-operator-74547568cd-kr4cn\" (UID: \"f015af7b-346b-42a5-bea4-6f58b6ab41a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kr4cn" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.967094 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1d4089f1-878b-4fc4-b0ff-52a713c3b9ab-signing-key\") pod \"service-ca-9c57cc56f-f2q4h\" (UID: \"1d4089f1-878b-4fc4-b0ff-52a713c3b9ab\") " pod="openshift-service-ca/service-ca-9c57cc56f-f2q4h" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.967149 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dmpd\" (UniqueName: \"kubernetes.io/projected/50bf9e5e-0f33-48d1-ac4f-8da7cc905b6f-kube-api-access-2dmpd\") pod \"ingress-canary-2c5f9\" (UID: \"50bf9e5e-0f33-48d1-ac4f-8da7cc905b6f\") " pod="openshift-ingress-canary/ingress-canary-2c5f9" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.967164 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/17aa0fcf-9538-4649-b9c8-0fdd6469c8da-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-8qsrq\" (UID: \"17aa0fcf-9538-4649-b9c8-0fdd6469c8da\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8qsrq"
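Two failures in this stretch look alarming but are consistent with a node still settling. The W0129 manager.go:1169 warning appears to be a cAdvisor startup race: the cgroup for the freshly created CRI-O sandbox (crio-cb33ac24...) showed up under /kubepods.slice before the runtime would answer for that container ID, so the watch-event handler got a 404; the container is found normally on a later resync. The UnmountVolume.TearDown failure for the old pod 8f668bae-... is the flip side of the MountDevice error above it: tearing down the previous image-registry pod's PVC needs the same kubevirt.io.hostpath-provisioner driver, which is still not registered, so it retries on the same 500ms backoff schedule.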
Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.967262 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/50bf9e5e-0f33-48d1-ac4f-8da7cc905b6f-cert\") pod \"ingress-canary-2c5f9\" (UID: \"50bf9e5e-0f33-48d1-ac4f-8da7cc905b6f\") " pod="openshift-ingress-canary/ingress-canary-2c5f9" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.967299 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbgjh\" (UniqueName: \"kubernetes.io/projected/17accc89-e860-4b12-b5b3-3da7adaa3430-kube-api-access-fbgjh\") pod \"marketplace-operator-79b997595-w8bm4\" (UID: \"17accc89-e860-4b12-b5b3-3da7adaa3430\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8bm4" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.967357 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlngc\" (UniqueName: \"kubernetes.io/projected/fa66bb51-108f-4e13-b494-37450cdbd13f-kube-api-access-zlngc\") pod \"machine-config-server-dddt4\" (UID: \"fa66bb51-108f-4e13-b494-37450cdbd13f\") " pod="openshift-machine-config-operator/machine-config-server-dddt4" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.967390 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/05a6a15e-b8e2-42b8-8e24-f891f348a835-mountpoint-dir\") pod \"csi-hostpathplugin-jfbvx\" (UID: \"05a6a15e-b8e2-42b8-8e24-f891f348a835\") " pod="hostpath-provisioner/csi-hostpathplugin-jfbvx" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.967457 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/643c5cab-3088-4021-a0ff-bb9e3c29326f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-p42xx\" (UID: \"643c5cab-3088-4021-a0ff-bb9e3c29326f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p42xx" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.967607 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/05a6a15e-b8e2-42b8-8e24-f891f348a835-socket-dir\") pod \"csi-hostpathplugin-jfbvx\" (UID: \"05a6a15e-b8e2-42b8-8e24-f891f348a835\") " pod="hostpath-provisioner/csi-hostpathplugin-jfbvx" Jan 29 16:24:34 crc kubenswrapper[4886]: E0129 16:24:34.967731 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:35.467714093 +0000 UTC m=+158.376433365 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.967756 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d7225de-b290-4181-83e8-7de96446822f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fkbjz\" (UID: \"5d7225de-b290-4181-83e8-7de96446822f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fkbjz" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.967779 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c5c84483-6cc1-4f51-86e1-330250fcb1d0-metrics-certs\") pod \"router-default-5444994796-zrg4t\" (UID: \"c5c84483-6cc1-4f51-86e1-330250fcb1d0\") " pod="openshift-ingress/router-default-5444994796-zrg4t" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.967927 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52ttw\" (UniqueName: \"kubernetes.io/projected/a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9-kube-api-access-52ttw\") pod \"collect-profiles-29495055-bkqmf\" (UID: \"a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-bkqmf" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.968130 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af8ac9dd-fc42-4e30-b840-c7f5ad734bea-config\") pod \"kube-controller-manager-operator-78b949d7b-blldt\" (UID: \"af8ac9dd-fc42-4e30-b840-c7f5ad734bea\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-blldt" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.968201 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/99f63064-683c-4132-83b3-53480c64f426-machine-approver-tls\") pod \"machine-approver-56656f9798-m2x88\" (UID: \"99f63064-683c-4132-83b3-53480c64f426\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m2x88" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.968301 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsvx7\" (UniqueName: \"kubernetes.io/projected/b7f6ff84-95a3-4119-b688-1d28cc3fc4b8-kube-api-access-fsvx7\") pod \"catalog-operator-68c6474976-24n77\" (UID: \"b7f6ff84-95a3-4119-b688-1d28cc3fc4b8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-24n77" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.975038 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-x62jn" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.975501 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ad09ea7-63c0-4583-acb7-da4ce7f694f4-serving-cert\") pod \"service-ca-operator-777779d784-z4r4v\" (UID: \"5ad09ea7-63c0-4583-acb7-da4ce7f694f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z4r4v" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.975579 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvvn6\" (UniqueName: \"kubernetes.io/projected/66990908-26a0-4a12-a85b-304c4ed052a9-kube-api-access-lvvn6\") pod \"dns-default-76mxm\" (UID: \"66990908-26a0-4a12-a85b-304c4ed052a9\") " pod="openshift-dns/dns-default-76mxm" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.975658 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trcth\" (UniqueName: \"kubernetes.io/projected/8d938410-1a49-4580-9a4e-49de4bae378e-kube-api-access-trcth\") pod \"kube-storage-version-migrator-operator-b67b599dd-n5wvz\" (UID: \"8d938410-1a49-4580-9a4e-49de4bae378e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n5wvz" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.976448 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9-config-volume\") pod \"collect-profiles-29495055-bkqmf\" (UID: \"a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-bkqmf" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.976647 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/05a6a15e-b8e2-42b8-8e24-f891f348a835-registration-dir\") pod \"csi-hostpathplugin-jfbvx\" (UID: \"05a6a15e-b8e2-42b8-8e24-f891f348a835\") " pod="hostpath-provisioner/csi-hostpathplugin-jfbvx" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.976728 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bc749cf3-40b6-4957-ac19-a5d6db460e00-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-plhr2\" (UID: \"bc749cf3-40b6-4957-ac19-a5d6db460e00\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-plhr2" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.976786 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b1d6caa5-f77a-4acf-a631-0c3abb84959c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.976999 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af8ac9dd-fc42-4e30-b840-c7f5ad734bea-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-blldt\" (UID: 
\"af8ac9dd-fc42-4e30-b840-c7f5ad734bea\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-blldt" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.977026 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg6rc\" (UniqueName: \"kubernetes.io/projected/0af2647d-2354-4929-914e-623c44c12232-kube-api-access-dg6rc\") pod \"package-server-manager-789f6589d5-4pwcz\" (UID: \"0af2647d-2354-4929-914e-623c44c12232\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pwcz" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.977077 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1d6caa5-f77a-4acf-a631-0c3abb84959c-bound-sa-token\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.977216 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66990908-26a0-4a12-a85b-304c4ed052a9-config-volume\") pod \"dns-default-76mxm\" (UID: \"66990908-26a0-4a12-a85b-304c4ed052a9\") " pod="openshift-dns/dns-default-76mxm" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.977305 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw466\" (UniqueName: \"kubernetes.io/projected/bc749cf3-40b6-4957-ac19-a5d6db460e00-kube-api-access-qw466\") pod \"machine-config-controller-84d6567774-plhr2\" (UID: \"bc749cf3-40b6-4957-ac19-a5d6db460e00\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-plhr2" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.977536 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67qf4\" (UniqueName: \"kubernetes.io/projected/643c5cab-3088-4021-a0ff-bb9e3c29326f-kube-api-access-67qf4\") pod \"olm-operator-6b444d44fb-p42xx\" (UID: \"643c5cab-3088-4021-a0ff-bb9e3c29326f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p42xx" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.977631 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9-secret-volume\") pod \"collect-profiles-29495055-bkqmf\" (UID: \"a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-bkqmf" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.977870 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d7225de-b290-4181-83e8-7de96446822f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fkbjz\" (UID: \"5d7225de-b290-4181-83e8-7de96446822f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fkbjz" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.977936 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c5c84483-6cc1-4f51-86e1-330250fcb1d0-default-certificate\") pod 
\"router-default-5444994796-zrg4t\" (UID: \"c5c84483-6cc1-4f51-86e1-330250fcb1d0\") " pod="openshift-ingress/router-default-5444994796-zrg4t" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.978116 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8x5n\" (UniqueName: \"kubernetes.io/projected/14647a71-8c69-4ae7-919a-fe0ef1684c1f-kube-api-access-c8x5n\") pod \"packageserver-d55dfcdfc-ssftv\" (UID: \"14647a71-8c69-4ae7-919a-fe0ef1684c1f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssftv" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.978776 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh7qc\" (UniqueName: \"kubernetes.io/projected/009f91e7-865b-400a-a879-4985c84b321c-kube-api-access-xh7qc\") pod \"control-plane-machine-set-operator-78cbb6b69f-l5v6d\" (UID: \"009f91e7-865b-400a-a879-4985c84b321c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l5v6d" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.978928 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b1d6caa5-f77a-4acf-a631-0c3abb84959c-registry-tls\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.979144 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1d6caa5-f77a-4acf-a631-0c3abb84959c-trusted-ca\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.979224 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vgbh\" (UniqueName: \"kubernetes.io/projected/b1d6caa5-f77a-4acf-a631-0c3abb84959c-kube-api-access-8vgbh\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.979274 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/66990908-26a0-4a12-a85b-304c4ed052a9-metrics-tls\") pod \"dns-default-76mxm\" (UID: \"66990908-26a0-4a12-a85b-304c4ed052a9\") " pod="openshift-dns/dns-default-76mxm" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.979352 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/fa66bb51-108f-4e13-b494-37450cdbd13f-node-bootstrap-token\") pod \"machine-config-server-dddt4\" (UID: \"fa66bb51-108f-4e13-b494-37450cdbd13f\") " pod="openshift-machine-config-operator/machine-config-server-dddt4" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.980036 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/05a6a15e-b8e2-42b8-8e24-f891f348a835-csi-data-dir\") pod \"csi-hostpathplugin-jfbvx\" (UID: \"05a6a15e-b8e2-42b8-8e24-f891f348a835\") " pod="hostpath-provisioner/csi-hostpathplugin-jfbvx" Jan 29 16:24:34 crc 
kubenswrapper[4886]: I0129 16:24:34.980410 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ad09ea7-63c0-4583-acb7-da4ce7f694f4-config\") pod \"service-ca-operator-777779d784-z4r4v\" (UID: \"5ad09ea7-63c0-4583-acb7-da4ce7f694f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z4r4v" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.980450 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f015af7b-346b-42a5-bea4-6f58b6ab41a7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kr4cn\" (UID: \"f015af7b-346b-42a5-bea4-6f58b6ab41a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kr4cn" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.980507 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9gpb\" (UniqueName: \"kubernetes.io/projected/c5c84483-6cc1-4f51-86e1-330250fcb1d0-kube-api-access-z9gpb\") pod \"router-default-5444994796-zrg4t\" (UID: \"c5c84483-6cc1-4f51-86e1-330250fcb1d0\") " pod="openshift-ingress/router-default-5444994796-zrg4t" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.980554 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhc29\" (UniqueName: \"kubernetes.io/projected/f015af7b-346b-42a5-bea4-6f58b6ab41a7-kube-api-access-fhc29\") pod \"machine-config-operator-74547568cd-kr4cn\" (UID: \"f015af7b-346b-42a5-bea4-6f58b6ab41a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kr4cn" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.980610 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/14647a71-8c69-4ae7-919a-fe0ef1684c1f-webhook-cert\") pod \"packageserver-d55dfcdfc-ssftv\" (UID: \"14647a71-8c69-4ae7-919a-fe0ef1684c1f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssftv" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.980646 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/99f63064-683c-4132-83b3-53480c64f426-auth-proxy-config\") pod \"machine-approver-56656f9798-m2x88\" (UID: \"99f63064-683c-4132-83b3-53480c64f426\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m2x88" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.980670 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/643c5cab-3088-4021-a0ff-bb9e3c29326f-srv-cert\") pod \"olm-operator-6b444d44fb-p42xx\" (UID: \"643c5cab-3088-4021-a0ff-bb9e3c29326f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p42xx" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.980699 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/05a6a15e-b8e2-42b8-8e24-f891f348a835-plugins-dir\") pod \"csi-hostpathplugin-jfbvx\" (UID: \"05a6a15e-b8e2-42b8-8e24-f891f348a835\") " pod="hostpath-provisioner/csi-hostpathplugin-jfbvx" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.980721 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99f63064-683c-4132-83b3-53480c64f426-config\") pod \"machine-approver-56656f9798-m2x88\" (UID: \"99f63064-683c-4132-83b3-53480c64f426\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m2x88" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.980807 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/fa66bb51-108f-4e13-b494-37450cdbd13f-certs\") pod \"machine-config-server-dddt4\" (UID: \"fa66bb51-108f-4e13-b494-37450cdbd13f\") " pod="openshift-machine-config-operator/machine-config-server-dddt4" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.980834 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6ttr\" (UniqueName: \"kubernetes.io/projected/1d4089f1-878b-4fc4-b0ff-52a713c3b9ab-kube-api-access-r6ttr\") pod \"service-ca-9c57cc56f-f2q4h\" (UID: \"1d4089f1-878b-4fc4-b0ff-52a713c3b9ab\") " pod="openshift-service-ca/service-ca-9c57cc56f-f2q4h" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.980874 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f015af7b-346b-42a5-bea4-6f58b6ab41a7-images\") pod \"machine-config-operator-74547568cd-kr4cn\" (UID: \"f015af7b-346b-42a5-bea4-6f58b6ab41a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kr4cn" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.980898 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17accc89-e860-4b12-b5b3-3da7adaa3430-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-w8bm4\" (UID: \"17accc89-e860-4b12-b5b3-3da7adaa3430\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8bm4" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.980918 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1d4089f1-878b-4fc4-b0ff-52a713c3b9ab-signing-cabundle\") pod \"service-ca-9c57cc56f-f2q4h\" (UID: \"1d4089f1-878b-4fc4-b0ff-52a713c3b9ab\") " pod="openshift-service-ca/service-ca-9c57cc56f-f2q4h" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.980941 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d938410-1a49-4580-9a4e-49de4bae378e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-n5wvz\" (UID: \"8d938410-1a49-4580-9a4e-49de4bae378e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n5wvz" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.980963 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af8ac9dd-fc42-4e30-b840-c7f5ad734bea-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-blldt\" (UID: \"af8ac9dd-fc42-4e30-b840-c7f5ad734bea\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-blldt" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.981032 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b1d6caa5-f77a-4acf-a631-0c3abb84959c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.981060 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqb2p\" (UniqueName: \"kubernetes.io/projected/05a6a15e-b8e2-42b8-8e24-f891f348a835-kube-api-access-bqb2p\") pod \"csi-hostpathplugin-jfbvx\" (UID: \"05a6a15e-b8e2-42b8-8e24-f891f348a835\") " pod="hostpath-provisioner/csi-hostpathplugin-jfbvx" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.981099 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b1d6caa5-f77a-4acf-a631-0c3abb84959c-registry-certificates\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.981124 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89zwm\" (UniqueName: \"kubernetes.io/projected/5d7225de-b290-4181-83e8-7de96446822f-kube-api-access-89zwm\") pod \"openshift-controller-manager-operator-756b6f6bc6-fkbjz\" (UID: \"5d7225de-b290-4181-83e8-7de96446822f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fkbjz" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.981146 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/14647a71-8c69-4ae7-919a-fe0ef1684c1f-apiservice-cert\") pod \"packageserver-d55dfcdfc-ssftv\" (UID: \"14647a71-8c69-4ae7-919a-fe0ef1684c1f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssftv" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.981193 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/009f91e7-865b-400a-a879-4985c84b321c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-l5v6d\" (UID: \"009f91e7-865b-400a-a879-4985c84b321c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l5v6d" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.981219 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmgj6\" (UniqueName: \"kubernetes.io/projected/17aa0fcf-9538-4649-b9c8-0fdd6469c8da-kube-api-access-rmgj6\") pod \"multus-admission-controller-857f4d67dd-8qsrq\" (UID: \"17aa0fcf-9538-4649-b9c8-0fdd6469c8da\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8qsrq" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.981249 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b7f6ff84-95a3-4119-b688-1d28cc3fc4b8-srv-cert\") pod \"catalog-operator-68c6474976-24n77\" (UID: \"b7f6ff84-95a3-4119-b688-1d28cc3fc4b8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-24n77" Jan 29 16:24:34 
crc kubenswrapper[4886]: I0129 16:24:34.981272 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b7f6ff84-95a3-4119-b688-1d28cc3fc4b8-profile-collector-cert\") pod \"catalog-operator-68c6474976-24n77\" (UID: \"b7f6ff84-95a3-4119-b688-1d28cc3fc4b8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-24n77" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.981302 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/17accc89-e860-4b12-b5b3-3da7adaa3430-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-w8bm4\" (UID: \"17accc89-e860-4b12-b5b3-3da7adaa3430\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8bm4" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.981320 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c5c84483-6cc1-4f51-86e1-330250fcb1d0-stats-auth\") pod \"router-default-5444994796-zrg4t\" (UID: \"c5c84483-6cc1-4f51-86e1-330250fcb1d0\") " pod="openshift-ingress/router-default-5444994796-zrg4t" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.981366 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d938410-1a49-4580-9a4e-49de4bae378e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-n5wvz\" (UID: \"8d938410-1a49-4580-9a4e-49de4bae378e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n5wvz" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.981396 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5c84483-6cc1-4f51-86e1-330250fcb1d0-service-ca-bundle\") pod \"router-default-5444994796-zrg4t\" (UID: \"c5c84483-6cc1-4f51-86e1-330250fcb1d0\") " pod="openshift-ingress/router-default-5444994796-zrg4t" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.981422 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2qc6\" (UniqueName: \"kubernetes.io/projected/43f861c0-d4a2-449e-b322-b92097bc56aa-kube-api-access-f2qc6\") pod \"migrator-59844c95c7-hjw5r\" (UID: \"43f861c0-d4a2-449e-b322-b92097bc56aa\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hjw5r" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.981461 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0af2647d-2354-4929-914e-623c44c12232-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4pwcz\" (UID: \"0af2647d-2354-4929-914e-623c44c12232\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pwcz" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.981647 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1d6caa5-f77a-4acf-a631-0c3abb84959c-trusted-ca\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.982054 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b1d6caa5-f77a-4acf-a631-0c3abb84959c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.987965 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b1d6caa5-f77a-4acf-a631-0c3abb84959c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.988597 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b1d6caa5-f77a-4acf-a631-0c3abb84959c-registry-tls\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.990437 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b1d6caa5-f77a-4acf-a631-0c3abb84959c-registry-certificates\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.991449 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-zjtrn" Jan 29 16:24:34 crc kubenswrapper[4886]: I0129 16:24:34.992987 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z5kbx" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.022298 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-spj4x" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.024434 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1d6caa5-f77a-4acf-a631-0c3abb84959c-bound-sa-token\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.053548 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vgbh\" (UniqueName: \"kubernetes.io/projected/b1d6caa5-f77a-4acf-a631-0c3abb84959c-kube-api-access-8vgbh\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.083642 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.083818 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw466\" (UniqueName: \"kubernetes.io/projected/bc749cf3-40b6-4957-ac19-a5d6db460e00-kube-api-access-qw466\") pod \"machine-config-controller-84d6567774-plhr2\" (UID: \"bc749cf3-40b6-4957-ac19-a5d6db460e00\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-plhr2" Jan 29 16:24:35 crc kubenswrapper[4886]: E0129 16:24:35.083852 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:35.583829819 +0000 UTC m=+158.492549101 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.083881 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67qf4\" (UniqueName: \"kubernetes.io/projected/643c5cab-3088-4021-a0ff-bb9e3c29326f-kube-api-access-67qf4\") pod \"olm-operator-6b444d44fb-p42xx\" (UID: \"643c5cab-3088-4021-a0ff-bb9e3c29326f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p42xx" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.083915 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9-secret-volume\") pod \"collect-profiles-29495055-bkqmf\" (UID: \"a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-bkqmf" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.083941 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d7225de-b290-4181-83e8-7de96446822f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fkbjz\" (UID: \"5d7225de-b290-4181-83e8-7de96446822f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fkbjz" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.083962 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c5c84483-6cc1-4f51-86e1-330250fcb1d0-default-certificate\") pod \"router-default-5444994796-zrg4t\" (UID: \"c5c84483-6cc1-4f51-86e1-330250fcb1d0\") " pod="openshift-ingress/router-default-5444994796-zrg4t" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.083988 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8x5n\" (UniqueName: \"kubernetes.io/projected/14647a71-8c69-4ae7-919a-fe0ef1684c1f-kube-api-access-c8x5n\") pod \"packageserver-d55dfcdfc-ssftv\" (UID: \"14647a71-8c69-4ae7-919a-fe0ef1684c1f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssftv" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084014 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh7qc\" (UniqueName: \"kubernetes.io/projected/009f91e7-865b-400a-a879-4985c84b321c-kube-api-access-xh7qc\") pod \"control-plane-machine-set-operator-78cbb6b69f-l5v6d\" (UID: \"009f91e7-865b-400a-a879-4985c84b321c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l5v6d" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084043 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/66990908-26a0-4a12-a85b-304c4ed052a9-metrics-tls\") pod \"dns-default-76mxm\" (UID: \"66990908-26a0-4a12-a85b-304c4ed052a9\") " pod="openshift-dns/dns-default-76mxm" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084067 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/fa66bb51-108f-4e13-b494-37450cdbd13f-node-bootstrap-token\") pod \"machine-config-server-dddt4\" (UID: \"fa66bb51-108f-4e13-b494-37450cdbd13f\") " pod="openshift-machine-config-operator/machine-config-server-dddt4" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084092 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/05a6a15e-b8e2-42b8-8e24-f891f348a835-csi-data-dir\") pod \"csi-hostpathplugin-jfbvx\" (UID: \"05a6a15e-b8e2-42b8-8e24-f891f348a835\") " pod="hostpath-provisioner/csi-hostpathplugin-jfbvx" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084114 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ad09ea7-63c0-4583-acb7-da4ce7f694f4-config\") pod \"service-ca-operator-777779d784-z4r4v\" (UID: \"5ad09ea7-63c0-4583-acb7-da4ce7f694f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z4r4v" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084135 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f015af7b-346b-42a5-bea4-6f58b6ab41a7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kr4cn\" (UID: \"f015af7b-346b-42a5-bea4-6f58b6ab41a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kr4cn" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084161 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9gpb\" (UniqueName: \"kubernetes.io/projected/c5c84483-6cc1-4f51-86e1-330250fcb1d0-kube-api-access-z9gpb\") pod \"router-default-5444994796-zrg4t\" (UID: \"c5c84483-6cc1-4f51-86e1-330250fcb1d0\") " pod="openshift-ingress/router-default-5444994796-zrg4t" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084184 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhc29\" (UniqueName: \"kubernetes.io/projected/f015af7b-346b-42a5-bea4-6f58b6ab41a7-kube-api-access-fhc29\") pod \"machine-config-operator-74547568cd-kr4cn\" (UID: \"f015af7b-346b-42a5-bea4-6f58b6ab41a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kr4cn" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084217 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/14647a71-8c69-4ae7-919a-fe0ef1684c1f-webhook-cert\") pod \"packageserver-d55dfcdfc-ssftv\" (UID: \"14647a71-8c69-4ae7-919a-fe0ef1684c1f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssftv" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084240 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/99f63064-683c-4132-83b3-53480c64f426-auth-proxy-config\") pod \"machine-approver-56656f9798-m2x88\" (UID: \"99f63064-683c-4132-83b3-53480c64f426\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m2x88" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084261 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/643c5cab-3088-4021-a0ff-bb9e3c29326f-srv-cert\") pod \"olm-operator-6b444d44fb-p42xx\" (UID: 
\"643c5cab-3088-4021-a0ff-bb9e3c29326f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p42xx" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084282 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/05a6a15e-b8e2-42b8-8e24-f891f348a835-plugins-dir\") pod \"csi-hostpathplugin-jfbvx\" (UID: \"05a6a15e-b8e2-42b8-8e24-f891f348a835\") " pod="hostpath-provisioner/csi-hostpathplugin-jfbvx" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084305 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99f63064-683c-4132-83b3-53480c64f426-config\") pod \"machine-approver-56656f9798-m2x88\" (UID: \"99f63064-683c-4132-83b3-53480c64f426\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m2x88" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084345 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/fa66bb51-108f-4e13-b494-37450cdbd13f-certs\") pod \"machine-config-server-dddt4\" (UID: \"fa66bb51-108f-4e13-b494-37450cdbd13f\") " pod="openshift-machine-config-operator/machine-config-server-dddt4" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084370 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f015af7b-346b-42a5-bea4-6f58b6ab41a7-images\") pod \"machine-config-operator-74547568cd-kr4cn\" (UID: \"f015af7b-346b-42a5-bea4-6f58b6ab41a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kr4cn" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084391 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17accc89-e860-4b12-b5b3-3da7adaa3430-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-w8bm4\" (UID: \"17accc89-e860-4b12-b5b3-3da7adaa3430\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8bm4" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084413 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1d4089f1-878b-4fc4-b0ff-52a713c3b9ab-signing-cabundle\") pod \"service-ca-9c57cc56f-f2q4h\" (UID: \"1d4089f1-878b-4fc4-b0ff-52a713c3b9ab\") " pod="openshift-service-ca/service-ca-9c57cc56f-f2q4h" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084433 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6ttr\" (UniqueName: \"kubernetes.io/projected/1d4089f1-878b-4fc4-b0ff-52a713c3b9ab-kube-api-access-r6ttr\") pod \"service-ca-9c57cc56f-f2q4h\" (UID: \"1d4089f1-878b-4fc4-b0ff-52a713c3b9ab\") " pod="openshift-service-ca/service-ca-9c57cc56f-f2q4h" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084455 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d938410-1a49-4580-9a4e-49de4bae378e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-n5wvz\" (UID: \"8d938410-1a49-4580-9a4e-49de4bae378e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n5wvz" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084475 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af8ac9dd-fc42-4e30-b840-c7f5ad734bea-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-blldt\" (UID: \"af8ac9dd-fc42-4e30-b840-c7f5ad734bea\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-blldt" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084500 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqb2p\" (UniqueName: \"kubernetes.io/projected/05a6a15e-b8e2-42b8-8e24-f891f348a835-kube-api-access-bqb2p\") pod \"csi-hostpathplugin-jfbvx\" (UID: \"05a6a15e-b8e2-42b8-8e24-f891f348a835\") " pod="hostpath-provisioner/csi-hostpathplugin-jfbvx" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084524 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89zwm\" (UniqueName: \"kubernetes.io/projected/5d7225de-b290-4181-83e8-7de96446822f-kube-api-access-89zwm\") pod \"openshift-controller-manager-operator-756b6f6bc6-fkbjz\" (UID: \"5d7225de-b290-4181-83e8-7de96446822f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fkbjz" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084544 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/14647a71-8c69-4ae7-919a-fe0ef1684c1f-apiservice-cert\") pod \"packageserver-d55dfcdfc-ssftv\" (UID: \"14647a71-8c69-4ae7-919a-fe0ef1684c1f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssftv" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084572 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/009f91e7-865b-400a-a879-4985c84b321c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-l5v6d\" (UID: \"009f91e7-865b-400a-a879-4985c84b321c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l5v6d" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084596 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmgj6\" (UniqueName: \"kubernetes.io/projected/17aa0fcf-9538-4649-b9c8-0fdd6469c8da-kube-api-access-rmgj6\") pod \"multus-admission-controller-857f4d67dd-8qsrq\" (UID: \"17aa0fcf-9538-4649-b9c8-0fdd6469c8da\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8qsrq" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084618 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b7f6ff84-95a3-4119-b688-1d28cc3fc4b8-srv-cert\") pod \"catalog-operator-68c6474976-24n77\" (UID: \"b7f6ff84-95a3-4119-b688-1d28cc3fc4b8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-24n77" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084640 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b7f6ff84-95a3-4119-b688-1d28cc3fc4b8-profile-collector-cert\") pod \"catalog-operator-68c6474976-24n77\" (UID: \"b7f6ff84-95a3-4119-b688-1d28cc3fc4b8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-24n77" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084666 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/17accc89-e860-4b12-b5b3-3da7adaa3430-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-w8bm4\" (UID: \"17accc89-e860-4b12-b5b3-3da7adaa3430\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8bm4" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084688 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c5c84483-6cc1-4f51-86e1-330250fcb1d0-stats-auth\") pod \"router-default-5444994796-zrg4t\" (UID: \"c5c84483-6cc1-4f51-86e1-330250fcb1d0\") " pod="openshift-ingress/router-default-5444994796-zrg4t" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084709 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d938410-1a49-4580-9a4e-49de4bae378e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-n5wvz\" (UID: \"8d938410-1a49-4580-9a4e-49de4bae378e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n5wvz" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084731 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5c84483-6cc1-4f51-86e1-330250fcb1d0-service-ca-bundle\") pod \"router-default-5444994796-zrg4t\" (UID: \"c5c84483-6cc1-4f51-86e1-330250fcb1d0\") " pod="openshift-ingress/router-default-5444994796-zrg4t" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084754 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2qc6\" (UniqueName: \"kubernetes.io/projected/43f861c0-d4a2-449e-b322-b92097bc56aa-kube-api-access-f2qc6\") pod \"migrator-59844c95c7-hjw5r\" (UID: \"43f861c0-d4a2-449e-b322-b92097bc56aa\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hjw5r" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084780 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0af2647d-2354-4929-914e-623c44c12232-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4pwcz\" (UID: \"0af2647d-2354-4929-914e-623c44c12232\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pwcz" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084802 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqrzz\" (UniqueName: \"kubernetes.io/projected/99f63064-683c-4132-83b3-53480c64f426-kube-api-access-sqrzz\") pod \"machine-approver-56656f9798-m2x88\" (UID: \"99f63064-683c-4132-83b3-53480c64f426\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m2x88" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084823 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/14647a71-8c69-4ae7-919a-fe0ef1684c1f-tmpfs\") pod \"packageserver-d55dfcdfc-ssftv\" (UID: \"14647a71-8c69-4ae7-919a-fe0ef1684c1f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssftv" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084853 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084880 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bc749cf3-40b6-4957-ac19-a5d6db460e00-proxy-tls\") pod \"machine-config-controller-84d6567774-plhr2\" (UID: \"bc749cf3-40b6-4957-ac19-a5d6db460e00\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-plhr2" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084905 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f015af7b-346b-42a5-bea4-6f58b6ab41a7-proxy-tls\") pod \"machine-config-operator-74547568cd-kr4cn\" (UID: \"f015af7b-346b-42a5-bea4-6f58b6ab41a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kr4cn" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084927 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gltxf\" (UniqueName: \"kubernetes.io/projected/5ad09ea7-63c0-4583-acb7-da4ce7f694f4-kube-api-access-gltxf\") pod \"service-ca-operator-777779d784-z4r4v\" (UID: \"5ad09ea7-63c0-4583-acb7-da4ce7f694f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z4r4v" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.084952 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dmpd\" (UniqueName: \"kubernetes.io/projected/50bf9e5e-0f33-48d1-ac4f-8da7cc905b6f-kube-api-access-2dmpd\") pod \"ingress-canary-2c5f9\" (UID: \"50bf9e5e-0f33-48d1-ac4f-8da7cc905b6f\") " pod="openshift-ingress-canary/ingress-canary-2c5f9" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.085565 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d7225de-b290-4181-83e8-7de96446822f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fkbjz\" (UID: \"5d7225de-b290-4181-83e8-7de96446822f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fkbjz" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.086632 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99f63064-683c-4132-83b3-53480c64f426-config\") pod \"machine-approver-56656f9798-m2x88\" (UID: \"99f63064-683c-4132-83b3-53480c64f426\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m2x88" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.086993 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1d4089f1-878b-4fc4-b0ff-52a713c3b9ab-signing-key\") pod \"service-ca-9c57cc56f-f2q4h\" (UID: \"1d4089f1-878b-4fc4-b0ff-52a713c3b9ab\") " pod="openshift-service-ca/service-ca-9c57cc56f-f2q4h" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.087067 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/17aa0fcf-9538-4649-b9c8-0fdd6469c8da-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-8qsrq\" (UID: \"17aa0fcf-9538-4649-b9c8-0fdd6469c8da\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-8qsrq" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.087096 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/50bf9e5e-0f33-48d1-ac4f-8da7cc905b6f-cert\") pod \"ingress-canary-2c5f9\" (UID: \"50bf9e5e-0f33-48d1-ac4f-8da7cc905b6f\") " pod="openshift-ingress-canary/ingress-canary-2c5f9" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.087154 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbgjh\" (UniqueName: \"kubernetes.io/projected/17accc89-e860-4b12-b5b3-3da7adaa3430-kube-api-access-fbgjh\") pod \"marketplace-operator-79b997595-w8bm4\" (UID: \"17accc89-e860-4b12-b5b3-3da7adaa3430\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8bm4" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.087189 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlngc\" (UniqueName: \"kubernetes.io/projected/fa66bb51-108f-4e13-b494-37450cdbd13f-kube-api-access-zlngc\") pod \"machine-config-server-dddt4\" (UID: \"fa66bb51-108f-4e13-b494-37450cdbd13f\") " pod="openshift-machine-config-operator/machine-config-server-dddt4" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.087245 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/05a6a15e-b8e2-42b8-8e24-f891f348a835-mountpoint-dir\") pod \"csi-hostpathplugin-jfbvx\" (UID: \"05a6a15e-b8e2-42b8-8e24-f891f348a835\") " pod="hostpath-provisioner/csi-hostpathplugin-jfbvx" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.087298 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/643c5cab-3088-4021-a0ff-bb9e3c29326f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-p42xx\" (UID: \"643c5cab-3088-4021-a0ff-bb9e3c29326f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p42xx" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.087359 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/05a6a15e-b8e2-42b8-8e24-f891f348a835-socket-dir\") pod \"csi-hostpathplugin-jfbvx\" (UID: \"05a6a15e-b8e2-42b8-8e24-f891f348a835\") " pod="hostpath-provisioner/csi-hostpathplugin-jfbvx" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.087396 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d7225de-b290-4181-83e8-7de96446822f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fkbjz\" (UID: \"5d7225de-b290-4181-83e8-7de96446822f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fkbjz" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.098401 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c5c84483-6cc1-4f51-86e1-330250fcb1d0-metrics-certs\") pod \"router-default-5444994796-zrg4t\" (UID: \"c5c84483-6cc1-4f51-86e1-330250fcb1d0\") " pod="openshift-ingress/router-default-5444994796-zrg4t" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.089182 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/05a6a15e-b8e2-42b8-8e24-f891f348a835-plugins-dir\") pod \"csi-hostpathplugin-jfbvx\" (UID: \"05a6a15e-b8e2-42b8-8e24-f891f348a835\") " pod="hostpath-provisioner/csi-hostpathplugin-jfbvx" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.090276 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9-secret-volume\") pod \"collect-profiles-29495055-bkqmf\" (UID: \"a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-bkqmf" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.090526 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17accc89-e860-4b12-b5b3-3da7adaa3430-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-w8bm4\" (UID: \"17accc89-e860-4b12-b5b3-3da7adaa3430\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8bm4" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.090761 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1d4089f1-878b-4fc4-b0ff-52a713c3b9ab-signing-cabundle\") pod \"service-ca-9c57cc56f-f2q4h\" (UID: \"1d4089f1-878b-4fc4-b0ff-52a713c3b9ab\") " pod="openshift-service-ca/service-ca-9c57cc56f-f2q4h" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.087153 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/99f63064-683c-4132-83b3-53480c64f426-auth-proxy-config\") pod \"machine-approver-56656f9798-m2x88\" (UID: \"99f63064-683c-4132-83b3-53480c64f426\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m2x88" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.090919 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/05a6a15e-b8e2-42b8-8e24-f891f348a835-csi-data-dir\") pod \"csi-hostpathplugin-jfbvx\" (UID: \"05a6a15e-b8e2-42b8-8e24-f891f348a835\") " pod="hostpath-provisioner/csi-hostpathplugin-jfbvx" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.091792 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/14647a71-8c69-4ae7-919a-fe0ef1684c1f-webhook-cert\") pod \"packageserver-d55dfcdfc-ssftv\" (UID: \"14647a71-8c69-4ae7-919a-fe0ef1684c1f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssftv" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.093182 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d938410-1a49-4580-9a4e-49de4bae378e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-n5wvz\" (UID: \"8d938410-1a49-4580-9a4e-49de4bae378e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n5wvz" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.095559 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ad09ea7-63c0-4583-acb7-da4ce7f694f4-config\") pod \"service-ca-operator-777779d784-z4r4v\" (UID: \"5ad09ea7-63c0-4583-acb7-da4ce7f694f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z4r4v" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 
16:24:35.095595 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/05a6a15e-b8e2-42b8-8e24-f891f348a835-mountpoint-dir\") pod \"csi-hostpathplugin-jfbvx\" (UID: \"05a6a15e-b8e2-42b8-8e24-f891f348a835\") " pod="hostpath-provisioner/csi-hostpathplugin-jfbvx" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.096095 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f015af7b-346b-42a5-bea4-6f58b6ab41a7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kr4cn\" (UID: \"f015af7b-346b-42a5-bea4-6f58b6ab41a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kr4cn" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.096263 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/05a6a15e-b8e2-42b8-8e24-f891f348a835-socket-dir\") pod \"csi-hostpathplugin-jfbvx\" (UID: \"05a6a15e-b8e2-42b8-8e24-f891f348a835\") " pod="hostpath-provisioner/csi-hostpathplugin-jfbvx" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.096912 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5c84483-6cc1-4f51-86e1-330250fcb1d0-service-ca-bundle\") pod \"router-default-5444994796-zrg4t\" (UID: \"c5c84483-6cc1-4f51-86e1-330250fcb1d0\") " pod="openshift-ingress/router-default-5444994796-zrg4t" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.098312 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c5c84483-6cc1-4f51-86e1-330250fcb1d0-default-certificate\") pod \"router-default-5444994796-zrg4t\" (UID: \"c5c84483-6cc1-4f51-86e1-330250fcb1d0\") " pod="openshift-ingress/router-default-5444994796-zrg4t" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.087687 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f015af7b-346b-42a5-bea4-6f58b6ab41a7-images\") pod \"machine-config-operator-74547568cd-kr4cn\" (UID: \"f015af7b-346b-42a5-bea4-6f58b6ab41a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kr4cn" Jan 29 16:24:35 crc kubenswrapper[4886]: E0129 16:24:35.101533 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:35.601025856 +0000 UTC m=+158.509745128 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.106622 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52ttw\" (UniqueName: \"kubernetes.io/projected/a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9-kube-api-access-52ttw\") pod \"collect-profiles-29495055-bkqmf\" (UID: \"a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-bkqmf" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.104230 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/14647a71-8c69-4ae7-919a-fe0ef1684c1f-tmpfs\") pod \"packageserver-d55dfcdfc-ssftv\" (UID: \"14647a71-8c69-4ae7-919a-fe0ef1684c1f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssftv" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.102661 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d7225de-b290-4181-83e8-7de96446822f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fkbjz\" (UID: \"5d7225de-b290-4181-83e8-7de96446822f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fkbjz" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.107572 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af8ac9dd-fc42-4e30-b840-c7f5ad734bea-config\") pod \"kube-controller-manager-operator-78b949d7b-blldt\" (UID: \"af8ac9dd-fc42-4e30-b840-c7f5ad734bea\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-blldt" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.107661 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/99f63064-683c-4132-83b3-53480c64f426-machine-approver-tls\") pod \"machine-approver-56656f9798-m2x88\" (UID: \"99f63064-683c-4132-83b3-53480c64f426\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m2x88" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.107704 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsvx7\" (UniqueName: \"kubernetes.io/projected/b7f6ff84-95a3-4119-b688-1d28cc3fc4b8-kube-api-access-fsvx7\") pod \"catalog-operator-68c6474976-24n77\" (UID: \"b7f6ff84-95a3-4119-b688-1d28cc3fc4b8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-24n77" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.107817 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ad09ea7-63c0-4583-acb7-da4ce7f694f4-serving-cert\") pod \"service-ca-operator-777779d784-z4r4v\" (UID: \"5ad09ea7-63c0-4583-acb7-da4ce7f694f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z4r4v" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.108309 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c5c84483-6cc1-4f51-86e1-330250fcb1d0-metrics-certs\") pod \"router-default-5444994796-zrg4t\" (UID: \"c5c84483-6cc1-4f51-86e1-330250fcb1d0\") " pod="openshift-ingress/router-default-5444994796-zrg4t" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.108817 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1d4089f1-878b-4fc4-b0ff-52a713c3b9ab-signing-key\") pod \"service-ca-9c57cc56f-f2q4h\" (UID: \"1d4089f1-878b-4fc4-b0ff-52a713c3b9ab\") " pod="openshift-service-ca/service-ca-9c57cc56f-f2q4h" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.110573 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvvn6\" (UniqueName: \"kubernetes.io/projected/66990908-26a0-4a12-a85b-304c4ed052a9-kube-api-access-lvvn6\") pod \"dns-default-76mxm\" (UID: \"66990908-26a0-4a12-a85b-304c4ed052a9\") " pod="openshift-dns/dns-default-76mxm" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.110672 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trcth\" (UniqueName: \"kubernetes.io/projected/8d938410-1a49-4580-9a4e-49de4bae378e-kube-api-access-trcth\") pod \"kube-storage-version-migrator-operator-b67b599dd-n5wvz\" (UID: \"8d938410-1a49-4580-9a4e-49de4bae378e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n5wvz" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.113413 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9-config-volume\") pod \"collect-profiles-29495055-bkqmf\" (UID: \"a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-bkqmf" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.113465 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/05a6a15e-b8e2-42b8-8e24-f891f348a835-registration-dir\") pod \"csi-hostpathplugin-jfbvx\" (UID: \"05a6a15e-b8e2-42b8-8e24-f891f348a835\") " pod="hostpath-provisioner/csi-hostpathplugin-jfbvx" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.113517 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bc749cf3-40b6-4957-ac19-a5d6db460e00-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-plhr2\" (UID: \"bc749cf3-40b6-4957-ac19-a5d6db460e00\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-plhr2" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.113559 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af8ac9dd-fc42-4e30-b840-c7f5ad734bea-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-blldt\" (UID: \"af8ac9dd-fc42-4e30-b840-c7f5ad734bea\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-blldt" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.113590 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg6rc\" (UniqueName: 
\"kubernetes.io/projected/0af2647d-2354-4929-914e-623c44c12232-kube-api-access-dg6rc\") pod \"package-server-manager-789f6589d5-4pwcz\" (UID: \"0af2647d-2354-4929-914e-623c44c12232\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pwcz" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.113614 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66990908-26a0-4a12-a85b-304c4ed052a9-config-volume\") pod \"dns-default-76mxm\" (UID: \"66990908-26a0-4a12-a85b-304c4ed052a9\") " pod="openshift-dns/dns-default-76mxm" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.113665 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af8ac9dd-fc42-4e30-b840-c7f5ad734bea-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-blldt\" (UID: \"af8ac9dd-fc42-4e30-b840-c7f5ad734bea\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-blldt" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.114309 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66990908-26a0-4a12-a85b-304c4ed052a9-config-volume\") pod \"dns-default-76mxm\" (UID: \"66990908-26a0-4a12-a85b-304c4ed052a9\") " pod="openshift-dns/dns-default-76mxm" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.114795 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c5c84483-6cc1-4f51-86e1-330250fcb1d0-stats-auth\") pod \"router-default-5444994796-zrg4t\" (UID: \"c5c84483-6cc1-4f51-86e1-330250fcb1d0\") " pod="openshift-ingress/router-default-5444994796-zrg4t" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.115738 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9-config-volume\") pod \"collect-profiles-29495055-bkqmf\" (UID: \"a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-bkqmf" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.115829 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/05a6a15e-b8e2-42b8-8e24-f891f348a835-registration-dir\") pod \"csi-hostpathplugin-jfbvx\" (UID: \"05a6a15e-b8e2-42b8-8e24-f891f348a835\") " pod="hostpath-provisioner/csi-hostpathplugin-jfbvx" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.116372 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/50bf9e5e-0f33-48d1-ac4f-8da7cc905b6f-cert\") pod \"ingress-canary-2c5f9\" (UID: \"50bf9e5e-0f33-48d1-ac4f-8da7cc905b6f\") " pod="openshift-ingress-canary/ingress-canary-2c5f9" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.116972 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b7f6ff84-95a3-4119-b688-1d28cc3fc4b8-profile-collector-cert\") pod \"catalog-operator-68c6474976-24n77\" (UID: \"b7f6ff84-95a3-4119-b688-1d28cc3fc4b8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-24n77" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.120977 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bc749cf3-40b6-4957-ac19-a5d6db460e00-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-plhr2\" (UID: \"bc749cf3-40b6-4957-ac19-a5d6db460e00\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-plhr2" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.121892 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/66990908-26a0-4a12-a85b-304c4ed052a9-metrics-tls\") pod \"dns-default-76mxm\" (UID: \"66990908-26a0-4a12-a85b-304c4ed052a9\") " pod="openshift-dns/dns-default-76mxm" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.123948 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/17accc89-e860-4b12-b5b3-3da7adaa3430-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-w8bm4\" (UID: \"17accc89-e860-4b12-b5b3-3da7adaa3430\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8bm4" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.124679 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0af2647d-2354-4929-914e-623c44c12232-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4pwcz\" (UID: \"0af2647d-2354-4929-914e-623c44c12232\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pwcz" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.125240 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/17aa0fcf-9538-4649-b9c8-0fdd6469c8da-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-8qsrq\" (UID: \"17aa0fcf-9538-4649-b9c8-0fdd6469c8da\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8qsrq" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.127216 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bc749cf3-40b6-4957-ac19-a5d6db460e00-proxy-tls\") pod \"machine-config-controller-84d6567774-plhr2\" (UID: \"bc749cf3-40b6-4957-ac19-a5d6db460e00\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-plhr2" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.128890 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/99f63064-683c-4132-83b3-53480c64f426-machine-approver-tls\") pod \"machine-approver-56656f9798-m2x88\" (UID: \"99f63064-683c-4132-83b3-53480c64f426\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m2x88" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.129495 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ad09ea7-63c0-4583-acb7-da4ce7f694f4-serving-cert\") pod \"service-ca-operator-777779d784-z4r4v\" (UID: \"5ad09ea7-63c0-4583-acb7-da4ce7f694f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z4r4v" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.129803 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f015af7b-346b-42a5-bea4-6f58b6ab41a7-proxy-tls\") pod \"machine-config-operator-74547568cd-kr4cn\" (UID: 
\"f015af7b-346b-42a5-bea4-6f58b6ab41a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kr4cn" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.130142 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/643c5cab-3088-4021-a0ff-bb9e3c29326f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-p42xx\" (UID: \"643c5cab-3088-4021-a0ff-bb9e3c29326f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p42xx" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.130601 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/643c5cab-3088-4021-a0ff-bb9e3c29326f-srv-cert\") pod \"olm-operator-6b444d44fb-p42xx\" (UID: \"643c5cab-3088-4021-a0ff-bb9e3c29326f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p42xx" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.131971 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/009f91e7-865b-400a-a879-4985c84b321c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-l5v6d\" (UID: \"009f91e7-865b-400a-a879-4985c84b321c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l5v6d" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.132500 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b7f6ff84-95a3-4119-b688-1d28cc3fc4b8-srv-cert\") pod \"catalog-operator-68c6474976-24n77\" (UID: \"b7f6ff84-95a3-4119-b688-1d28cc3fc4b8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-24n77" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.135781 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/fa66bb51-108f-4e13-b494-37450cdbd13f-certs\") pod \"machine-config-server-dddt4\" (UID: \"fa66bb51-108f-4e13-b494-37450cdbd13f\") " pod="openshift-machine-config-operator/machine-config-server-dddt4" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.136647 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/fa66bb51-108f-4e13-b494-37450cdbd13f-node-bootstrap-token\") pod \"machine-config-server-dddt4\" (UID: \"fa66bb51-108f-4e13-b494-37450cdbd13f\") " pod="openshift-machine-config-operator/machine-config-server-dddt4" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.139002 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d938410-1a49-4580-9a4e-49de4bae378e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-n5wvz\" (UID: \"8d938410-1a49-4580-9a4e-49de4bae378e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n5wvz" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.139089 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af8ac9dd-fc42-4e30-b840-c7f5ad734bea-config\") pod \"kube-controller-manager-operator-78b949d7b-blldt\" (UID: \"af8ac9dd-fc42-4e30-b840-c7f5ad734bea\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-blldt" Jan 29 16:24:35 crc 
kubenswrapper[4886]: I0129 16:24:35.139247 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/14647a71-8c69-4ae7-919a-fe0ef1684c1f-apiservice-cert\") pod \"packageserver-d55dfcdfc-ssftv\" (UID: \"14647a71-8c69-4ae7-919a-fe0ef1684c1f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssftv" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.149662 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw466\" (UniqueName: \"kubernetes.io/projected/bc749cf3-40b6-4957-ac19-a5d6db460e00-kube-api-access-qw466\") pod \"machine-config-controller-84d6567774-plhr2\" (UID: \"bc749cf3-40b6-4957-ac19-a5d6db460e00\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-plhr2" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.158962 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67qf4\" (UniqueName: \"kubernetes.io/projected/643c5cab-3088-4021-a0ff-bb9e3c29326f-kube-api-access-67qf4\") pod \"olm-operator-6b444d44fb-p42xx\" (UID: \"643c5cab-3088-4021-a0ff-bb9e3c29326f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p42xx" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.164132 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5l855"] Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.175216 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6ttr\" (UniqueName: \"kubernetes.io/projected/1d4089f1-878b-4fc4-b0ff-52a713c3b9ab-kube-api-access-r6ttr\") pod \"service-ca-9c57cc56f-f2q4h\" (UID: \"1d4089f1-878b-4fc4-b0ff-52a713c3b9ab\") " pod="openshift-service-ca/service-ca-9c57cc56f-f2q4h" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.182622 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-plhr2" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.185980 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hvwx7"] Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.194540 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p42xx" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.194627 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ghfg9"] Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.197916 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhc29\" (UniqueName: \"kubernetes.io/projected/f015af7b-346b-42a5-bea4-6f58b6ab41a7-kube-api-access-fhc29\") pod \"machine-config-operator-74547568cd-kr4cn\" (UID: \"f015af7b-346b-42a5-bea4-6f58b6ab41a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kr4cn" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.204416 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqrzz\" (UniqueName: \"kubernetes.io/projected/99f63064-683c-4132-83b3-53480c64f426-kube-api-access-sqrzz\") pod \"machine-approver-56656f9798-m2x88\" (UID: \"99f63064-683c-4132-83b3-53480c64f426\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m2x88" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.216091 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:35 crc kubenswrapper[4886]: E0129 16:24:35.216649 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:35.716624646 +0000 UTC m=+158.625343918 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.242378 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlngc\" (UniqueName: \"kubernetes.io/projected/fa66bb51-108f-4e13-b494-37450cdbd13f-kube-api-access-zlngc\") pod \"machine-config-server-dddt4\" (UID: \"fa66bb51-108f-4e13-b494-37450cdbd13f\") " pod="openshift-machine-config-operator/machine-config-server-dddt4" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.251272 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbgjh\" (UniqueName: \"kubernetes.io/projected/17accc89-e860-4b12-b5b3-3da7adaa3430-kube-api-access-fbgjh\") pod \"marketplace-operator-79b997595-w8bm4\" (UID: \"17accc89-e860-4b12-b5b3-3da7adaa3430\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8bm4" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.291897 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89zwm\" (UniqueName: \"kubernetes.io/projected/5d7225de-b290-4181-83e8-7de96446822f-kube-api-access-89zwm\") pod \"openshift-controller-manager-operator-756b6f6bc6-fkbjz\" (UID: \"5d7225de-b290-4181-83e8-7de96446822f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fkbjz" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.308224 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh7qc\" (UniqueName: \"kubernetes.io/projected/009f91e7-865b-400a-a879-4985c84b321c-kube-api-access-xh7qc\") pod \"control-plane-machine-set-operator-78cbb6b69f-l5v6d\" (UID: \"009f91e7-865b-400a-a879-4985c84b321c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l5v6d" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.313448 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-dddt4" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.314638 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-frztl"] Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.320965 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:35 crc kubenswrapper[4886]: E0129 16:24:35.321385 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:35.821364476 +0000 UTC m=+158.730083818 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.332388 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8x5n\" (UniqueName: \"kubernetes.io/projected/14647a71-8c69-4ae7-919a-fe0ef1684c1f-kube-api-access-c8x5n\") pod \"packageserver-d55dfcdfc-ssftv\" (UID: \"14647a71-8c69-4ae7-919a-fe0ef1684c1f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssftv" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.346705 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-fgmg6"] Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.348237 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9gpb\" (UniqueName: \"kubernetes.io/projected/c5c84483-6cc1-4f51-86e1-330250fcb1d0-kube-api-access-z9gpb\") pod \"router-default-5444994796-zrg4t\" (UID: \"c5c84483-6cc1-4f51-86e1-330250fcb1d0\") " pod="openshift-ingress/router-default-5444994796-zrg4t" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.372072 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fkbjz" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.381690 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-bxbsl"] Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.381840 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmgj6\" (UniqueName: \"kubernetes.io/projected/17aa0fcf-9538-4649-b9c8-0fdd6469c8da-kube-api-access-rmgj6\") pod \"multus-admission-controller-857f4d67dd-8qsrq\" (UID: \"17aa0fcf-9538-4649-b9c8-0fdd6469c8da\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8qsrq" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.382665 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4rg2h"] Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.387082 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2qc6\" (UniqueName: \"kubernetes.io/projected/43f861c0-d4a2-449e-b322-b92097bc56aa-kube-api-access-f2qc6\") pod \"migrator-59844c95c7-hjw5r\" (UID: \"43f861c0-d4a2-449e-b322-b92097bc56aa\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hjw5r" Jan 29 16:24:35 crc kubenswrapper[4886]: W0129 16:24:35.389773 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3510e180_be29_469c_bfa0_b06702f80c93.slice/crio-e1026a5853033841b4330ae054119019b196dfd01d843c6bd8efe15aa73e26c0 WatchSource:0}: Error finding container e1026a5853033841b4330ae054119019b196dfd01d843c6bd8efe15aa73e26c0: Status 404 returned error can't find the container with id e1026a5853033841b4330ae054119019b196dfd01d843c6bd8efe15aa73e26c0 Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.395725 4886 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m2x88" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.404855 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-zrg4t" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.406699 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9"] Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.408226 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqb2p\" (UniqueName: \"kubernetes.io/projected/05a6a15e-b8e2-42b8-8e24-f891f348a835-kube-api-access-bqb2p\") pod \"csi-hostpathplugin-jfbvx\" (UID: \"05a6a15e-b8e2-42b8-8e24-f891f348a835\") " pod="hostpath-provisioner/csi-hostpathplugin-jfbvx" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.416668 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt"] Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.419617 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hjw5r" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.423025 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:35 crc kubenswrapper[4886]: E0129 16:24:35.423463 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:35.923442398 +0000 UTC m=+158.832161670 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.427353 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wjnz"] Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.430470 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gltxf\" (UniqueName: \"kubernetes.io/projected/5ad09ea7-63c0-4583-acb7-da4ce7f694f4-kube-api-access-gltxf\") pod \"service-ca-operator-777779d784-z4r4v\" (UID: \"5ad09ea7-63c0-4583-acb7-da4ce7f694f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z4r4v" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.430867 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l5v6d" Jan 29 16:24:35 crc kubenswrapper[4886]: W0129 16:24:35.435042 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbffd7e9c_5274_4e27_b5d9_7e23ae3cbfbc.slice/crio-3ed83d7005d7e095515ad911c2d110e9def318dc1d43fa77b6c6bb0db30a2290 WatchSource:0}: Error finding container 3ed83d7005d7e095515ad911c2d110e9def318dc1d43fa77b6c6bb0db30a2290: Status 404 returned error can't find the container with id 3ed83d7005d7e095515ad911c2d110e9def318dc1d43fa77b6c6bb0db30a2290 Jan 29 16:24:35 crc kubenswrapper[4886]: W0129 16:24:35.445145 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb068b0a_4b6b_48b7_bae4_ab193394f299.slice/crio-5b391d085c08e1c1dfac270a21f6cff67072029830c3d61c34b03a6c51728f7e WatchSource:0}: Error finding container 5b391d085c08e1c1dfac270a21f6cff67072029830c3d61c34b03a6c51728f7e: Status 404 returned error can't find the container with id 5b391d085c08e1c1dfac270a21f6cff67072029830c3d61c34b03a6c51728f7e Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.450387 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dmpd\" (UniqueName: \"kubernetes.io/projected/50bf9e5e-0f33-48d1-ac4f-8da7cc905b6f-kube-api-access-2dmpd\") pod \"ingress-canary-2c5f9\" (UID: \"50bf9e5e-0f33-48d1-ac4f-8da7cc905b6f\") " pod="openshift-ingress-canary/ingress-canary-2c5f9" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.460053 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-f2q4h" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.465264 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsvx7\" (UniqueName: \"kubernetes.io/projected/b7f6ff84-95a3-4119-b688-1d28cc3fc4b8-kube-api-access-fsvx7\") pod \"catalog-operator-68c6474976-24n77\" (UID: \"b7f6ff84-95a3-4119-b688-1d28cc3fc4b8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-24n77" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.467809 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kr4cn" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.489900 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvvn6\" (UniqueName: \"kubernetes.io/projected/66990908-26a0-4a12-a85b-304c4ed052a9-kube-api-access-lvvn6\") pod \"dns-default-76mxm\" (UID: \"66990908-26a0-4a12-a85b-304c4ed052a9\") " pod="openshift-dns/dns-default-76mxm" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.502374 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-z4r4v" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.512675 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-8qsrq" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.514658 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trcth\" (UniqueName: \"kubernetes.io/projected/8d938410-1a49-4580-9a4e-49de4bae378e-kube-api-access-trcth\") pod \"kube-storage-version-migrator-operator-b67b599dd-n5wvz\" (UID: \"8d938410-1a49-4580-9a4e-49de4bae378e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n5wvz" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.518550 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-z5kbx"] Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.524086 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9" event={"ID":"eb068b0a-4b6b-48b7-bae4-ab193394f299","Type":"ContainerStarted","Data":"5b391d085c08e1c1dfac270a21f6cff67072029830c3d61c34b03a6c51728f7e"} Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.524536 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.525705 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hvwx7" event={"ID":"a8ec6d15-494f-427c-b532-adebe8e9d910","Type":"ContainerStarted","Data":"177c907cf44b709809432726526a3ecd5e357d7697b006aa9939f1af948a6b50"} Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.525839 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af8ac9dd-fc42-4e30-b840-c7f5ad734bea-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-blldt\" (UID: \"af8ac9dd-fc42-4e30-b840-c7f5ad734bea\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-blldt" Jan 29 16:24:35 crc kubenswrapper[4886]: E0129 16:24:35.526383 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:36.026361234 +0000 UTC m=+158.935080506 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.527415 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-frztl" event={"ID":"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5","Type":"ContainerStarted","Data":"f5f1eb8dc3efdd72b68491a7af9fe6df247f17abe7404590089aab88c87a64e1"} Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.538535 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssftv" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.539089 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5l855" event={"ID":"324f040b-716b-41ff-80af-acd92d47a95d","Type":"ContainerStarted","Data":"d92a48a036d2665758ece18be1ba785dd640d6a821c41807f760cd312836c088"} Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.539129 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5l855" event={"ID":"324f040b-716b-41ff-80af-acd92d47a95d","Type":"ContainerStarted","Data":"ae8697c77249b0329833c57df721798c8763199c1b37a54b90a24e6d706049ec"} Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.543303 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-dddt4" event={"ID":"fa66bb51-108f-4e13-b494-37450cdbd13f","Type":"ContainerStarted","Data":"b5280765d896a54ebcce9bb4737c608d7bb18a83b8c2ea2a1077cad298b0bbba"} Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.547995 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-w8bm4" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.548701 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4rg2h" event={"ID":"4d5118e4-db44-4e09-a04d-2036e251936b","Type":"ContainerStarted","Data":"6fff8a070d1d246b9de78c2701294ccd82667531237f5c020ada5028f01e8438"} Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.549023 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52ttw\" (UniqueName: \"kubernetes.io/projected/a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9-kube-api-access-52ttw\") pod \"collect-profiles-29495055-bkqmf\" (UID: \"a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-bkqmf" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.557812 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-blldt" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.571686 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-76mxm" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.576975 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg6rc\" (UniqueName: \"kubernetes.io/projected/0af2647d-2354-4929-914e-623c44c12232-kube-api-access-dg6rc\") pod \"package-server-manager-789f6589d5-4pwcz\" (UID: \"0af2647d-2354-4929-914e-623c44c12232\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pwcz" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.583149 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ghfg9" event={"ID":"b8672426-860f-4c9e-a776-094b8df786a2","Type":"ContainerStarted","Data":"500540e545f40884cf29549f7972ea23fb5cfbc9dfe655fc0e53b0895c35cbd7"} Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.592630 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-jfbvx" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.598665 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2c5f9" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.602898 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pgq49"] Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.603032 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bj8hg"] Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.607190 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" event={"ID":"b947565b-6a14-4bbd-881e-e82c33ca3a3b","Type":"ContainerStarted","Data":"8bc0819e4d3779242ef0e41d51afff359c9061460b45623abee6c85c9020ca9a"} Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.607248 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.607260 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" event={"ID":"b947565b-6a14-4bbd-881e-e82c33ca3a3b","Type":"ContainerStarted","Data":"cb33ac24972d3d5dba165317920577129d54d60d3420d9aec798c5982a6dac0a"} Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.613498 4886 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-mpttg container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.613545 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" podUID="b947565b-6a14-4bbd-881e-e82c33ca3a3b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.622484 4886 generic.go:334] "Generic (PLEG): container finished" podID="1d35f633-a6e9-4890-8c3f-ec87291ac03f" containerID="c1794b34cd19775f2f271d86367ee081ebf1d9b35ee62528df68ea6408f5435b" exitCode=0 Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 
16:24:35.622785 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" event={"ID":"1d35f633-a6e9-4890-8c3f-ec87291ac03f","Type":"ContainerDied","Data":"c1794b34cd19775f2f271d86367ee081ebf1d9b35ee62528df68ea6408f5435b"} Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.622816 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p42xx"] Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.622830 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" event={"ID":"1d35f633-a6e9-4890-8c3f-ec87291ac03f","Type":"ContainerStarted","Data":"3216f18f58345c39bce89fb2a1e4db3ebcc909efd668b0447962e1becd9e577a"} Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.647289 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-wczvq"] Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.653896 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bxbsl" event={"ID":"bffd7e9c-5274-4e27-b5d9-7e23ae3cbfbc","Type":"ContainerStarted","Data":"3ed83d7005d7e095515ad911c2d110e9def318dc1d43fa77b6c6bb0db30a2290"} Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.661267 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:35 crc kubenswrapper[4886]: E0129 16:24:35.662762 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:36.162743227 +0000 UTC m=+159.071462489 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.662940 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.665166 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-fgmg6" event={"ID":"3510e180-be29-469c-bfa0-b06702f80c93","Type":"ContainerStarted","Data":"e1026a5853033841b4330ae054119019b196dfd01d843c6bd8efe15aa73e26c0"} Jan 29 16:24:35 crc kubenswrapper[4886]: E0129 16:24:35.665264 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:36.165243681 +0000 UTC m=+159.073962963 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.703278 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-spj4x"] Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.737801 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-24n77" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.743274 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-plhr2"] Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.751824 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n5wvz" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.757008 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-x62jn"] Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.760971 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-zjtrn"] Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.765709 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:35 crc kubenswrapper[4886]: E0129 16:24:35.766202 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:36.266182299 +0000 UTC m=+159.174901561 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.777153 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-bkqmf" Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.824373 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pwcz" Jan 29 16:24:35 crc kubenswrapper[4886]: W0129 16:24:35.858765 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99f63064_683c_4132_83b3_53480c64f426.slice/crio-c1414ebe590b4d22073dcaf5395c6c6b6cd5c941898e5efc51d05d849679f931 WatchSource:0}: Error finding container c1414ebe590b4d22073dcaf5395c6c6b6cd5c941898e5efc51d05d849679f931: Status 404 returned error can't find the container with id c1414ebe590b4d22073dcaf5395c6c6b6cd5c941898e5efc51d05d849679f931 Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.867042 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:35 crc kubenswrapper[4886]: E0129 16:24:35.867399 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:36.367386764 +0000 UTC m=+159.276106036 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.888008 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fkbjz"] Jan 29 16:24:35 crc kubenswrapper[4886]: I0129 16:24:35.968712 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:35 crc kubenswrapper[4886]: E0129 16:24:35.969738 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:36.469702892 +0000 UTC m=+159.378422154 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:36 crc kubenswrapper[4886]: I0129 16:24:36.006857 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-hjw5r"] Jan 29 16:24:36 crc kubenswrapper[4886]: I0129 16:24:36.057138 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l5v6d"] Jan 29 16:24:36 crc kubenswrapper[4886]: I0129 16:24:36.071644 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:36 crc kubenswrapper[4886]: E0129 16:24:36.071911 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:36.571899627 +0000 UTC m=+159.480618899 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:36 crc kubenswrapper[4886]: I0129 16:24:36.176798 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:36 crc kubenswrapper[4886]: E0129 16:24:36.178093 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:36.678045139 +0000 UTC m=+159.586764411 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:36 crc kubenswrapper[4886]: I0129 16:24:36.280176 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:36 crc kubenswrapper[4886]: E0129 16:24:36.280801 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:36.780778729 +0000 UTC m=+159.689498001 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:36 crc kubenswrapper[4886]: I0129 16:24:36.382578 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:36 crc kubenswrapper[4886]: E0129 16:24:36.383430 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:36.883407947 +0000 UTC m=+159.792127219 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:36 crc kubenswrapper[4886]: I0129 16:24:36.484768 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-8qsrq"] Jan 29 16:24:36 crc kubenswrapper[4886]: I0129 16:24:36.501503 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:36 crc kubenswrapper[4886]: E0129 16:24:36.501747 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:37.001662426 +0000 UTC m=+159.910381688 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:36 crc kubenswrapper[4886]: I0129 16:24:36.511622 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-f2q4h"] Jan 29 16:24:36 crc kubenswrapper[4886]: I0129 16:24:36.513968 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kr4cn"] Jan 29 16:24:36 crc kubenswrapper[4886]: I0129 16:24:36.570917 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jfbvx"] Jan 29 16:24:36 crc kubenswrapper[4886]: I0129 16:24:36.596032 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-z4r4v"] Jan 29 16:24:36 crc kubenswrapper[4886]: I0129 16:24:36.606757 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:36 crc kubenswrapper[4886]: E0129 16:24:36.609741 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:37.109691022 +0000 UTC m=+160.018410294 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:36 crc kubenswrapper[4886]: I0129 16:24:36.677870 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-24n77"] Jan 29 16:24:36 crc kubenswrapper[4886]: I0129 16:24:36.681058 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ghfg9" event={"ID":"b8672426-860f-4c9e-a776-094b8df786a2","Type":"ContainerStarted","Data":"9d3fc88d56ed9c5ae5e495d4f98aaaa3b14f584441022d87259d65fb3bc5de02"} Jan 29 16:24:36 crc kubenswrapper[4886]: I0129 16:24:36.681090 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssftv"] Jan 29 16:24:36 crc kubenswrapper[4886]: I0129 16:24:36.713004 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-bj8hg" event={"ID":"79844037-42b5-456b-acbd-45fc61f251d9","Type":"ContainerStarted","Data":"d4bc0a4c1597a24607a9707ec589f3827097672ea8f187a48e9921b3855b8d43"} Jan 29 16:24:36 crc kubenswrapper[4886]: I0129 16:24:36.722106 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-x62jn" event={"ID":"d42a606f-2b2f-4782-ba98-15d8662eb3a9","Type":"ContainerStarted","Data":"669c1974cb87cc3278de89eb4c0a65355f1e5f44c47cb63b48e3d5bd3ab922c9"} Jan 29 16:24:36 crc kubenswrapper[4886]: I0129 16:24:36.727359 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:36 crc kubenswrapper[4886]: E0129 16:24:36.729043 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:37.229029513 +0000 UTC m=+160.137748785 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:36 crc kubenswrapper[4886]: I0129 16:24:36.729692 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-blldt"] Jan 29 16:24:36 crc kubenswrapper[4886]: W0129 16:24:36.779051 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14647a71_8c69_4ae7_919a_fe0ef1684c1f.slice/crio-03bc03bc771fe9d4ba9559b7f79d6ae8d3279429eb2d054e056060a61efb1ef7 WatchSource:0}: Error finding container 03bc03bc771fe9d4ba9559b7f79d6ae8d3279429eb2d054e056060a61efb1ef7: Status 404 returned error can't find the container with id 03bc03bc771fe9d4ba9559b7f79d6ae8d3279429eb2d054e056060a61efb1ef7 Jan 29 16:24:36 crc kubenswrapper[4886]: W0129 16:24:36.780084 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d4089f1_878b_4fc4_b0ff_52a713c3b9ab.slice/crio-b476d5316f99542fc5130b59e47fb58c1e303ec0e6f6323cfaea0d3b81c01a39 WatchSource:0}: Error finding container b476d5316f99542fc5130b59e47fb58c1e303ec0e6f6323cfaea0d3b81c01a39: Status 404 returned error can't find the container with id b476d5316f99542fc5130b59e47fb58c1e303ec0e6f6323cfaea0d3b81c01a39 Jan 29 16:24:36 crc kubenswrapper[4886]: I0129 16:24:36.780248 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" event={"ID":"7c5463e2-9818-4a5e-8dd0-36cd4c78d749","Type":"ContainerStarted","Data":"c87647539e17a7f29411a9b217c24941ac92c7d64b18e452e61f47c3c6e466b3"} Jan 29 16:24:36 crc kubenswrapper[4886]: I0129 16:24:36.821303 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m2x88" event={"ID":"99f63064-683c-4132-83b3-53480c64f426","Type":"ContainerStarted","Data":"c1414ebe590b4d22073dcaf5395c6c6b6cd5c941898e5efc51d05d849679f931"} Jan 29 16:24:36 crc kubenswrapper[4886]: I0129 16:24:36.828030 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:36 crc kubenswrapper[4886]: E0129 16:24:36.828378 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:37.328361423 +0000 UTC m=+160.237080695 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:36 crc kubenswrapper[4886]: I0129 16:24:36.872300 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-76mxm"] Jan 29 16:24:36 crc kubenswrapper[4886]: I0129 16:24:36.872350 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-plhr2" event={"ID":"bc749cf3-40b6-4957-ac19-a5d6db460e00","Type":"ContainerStarted","Data":"bb4d8856436bbf1b847b63466a6912f86871592a4e84151d8c633da0116c3b07"} Jan 29 16:24:36 crc kubenswrapper[4886]: I0129 16:24:36.895731 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-frztl" event={"ID":"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5","Type":"ContainerStarted","Data":"1b0d59f7a0b0f2503aadbe69a4ed4abbcb0da9a1640279030e487d1ecaa3fce8"} Jan 29 16:24:36 crc kubenswrapper[4886]: I0129 16:24:36.940515 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:36 crc kubenswrapper[4886]: E0129 16:24:36.943578 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:37.443555062 +0000 UTC m=+160.352274344 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:36 crc kubenswrapper[4886]: I0129 16:24:36.979626 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-2c5f9"] Jan 29 16:24:36 crc kubenswrapper[4886]: I0129 16:24:36.997606 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-dddt4" event={"ID":"fa66bb51-108f-4e13-b494-37450cdbd13f","Type":"ContainerStarted","Data":"1f04c9de1b241169b03aefc19e6de98b311964ea3de2fa7bf35831699c046ec7"} Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.042200 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-frztl" podStartSLOduration=129.042178421 podStartE2EDuration="2m9.042178421s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:36.996849204 +0000 UTC m=+159.905568486" watchObservedRunningTime="2026-01-29 16:24:37.042178421 +0000 UTC m=+159.950897693" Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.042655 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:37 crc kubenswrapper[4886]: E0129 16:24:37.043974 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:37.543954323 +0000 UTC m=+160.452673595 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.046553 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l5v6d" event={"ID":"009f91e7-865b-400a-a879-4985c84b321c","Type":"ContainerStarted","Data":"86ff4dc6a16ed3dd7b56175d7d149ee22a1e62e6c28287b7892dae52e78b6df9"} Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.068076 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" podStartSLOduration=129.068057424 podStartE2EDuration="2m9.068057424s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:37.067683143 +0000 UTC m=+159.976402415" watchObservedRunningTime="2026-01-29 16:24:37.068057424 +0000 UTC m=+159.976776696" Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.071350 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-spj4x" event={"ID":"e3790628-7588-42bf-ace6-04e2a0f1a09a","Type":"ContainerStarted","Data":"4b354e61d8cd39a5c57be5bb7e5fdf329f78fcf82487790f8414af0afc7a12fe"} Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.108860 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hvwx7" event={"ID":"a8ec6d15-494f-427c-b532-adebe8e9d910","Type":"ContainerStarted","Data":"4a65c42e7422b8780bac5007a89c45eb9889d927e445f4d0f95e81638ed6746d"} Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.127005 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z5kbx" event={"ID":"793b5b1f-d882-4f05-be9f-7515433a91e7","Type":"ContainerStarted","Data":"26842e055859d7685eacde32cca656b37a7f22b9f6b0d9bb0aff194e6b682a6d"} Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.127073 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z5kbx" event={"ID":"793b5b1f-d882-4f05-be9f-7515433a91e7","Type":"ContainerStarted","Data":"7c8ccf4128f6fdedfb69f3de6b9dd82594ea1327583a3ec48469189f53962ed3"} Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.134313 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fkbjz" event={"ID":"5d7225de-b290-4181-83e8-7de96446822f","Type":"ContainerStarted","Data":"b4217494d901404a0e57cfe7616f29324469ddcf26c56731948cbe55944094c5"} Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.134453 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5l855" podStartSLOduration=129.134427822 podStartE2EDuration="2m9.134427822s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-01-29 16:24:37.10927872 +0000 UTC m=+160.017997982" watchObservedRunningTime="2026-01-29 16:24:37.134427822 +0000 UTC m=+160.043147094" Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.139905 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-wczvq" event={"ID":"d677ab93-2fac-4612-8558-8ffc559d5247","Type":"ContainerStarted","Data":"f30ee225cc5adaa8b78485076009c2bc48f7004ccd61426cd53ed9df7f2bb813"} Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.141693 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pwcz"] Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.142074 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-wczvq" Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.144800 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:37 crc kubenswrapper[4886]: E0129 16:24:37.145098 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:37.645085907 +0000 UTC m=+160.553805179 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.145128 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w8bm4"] Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.145803 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ghfg9" podStartSLOduration=129.145783087 podStartE2EDuration="2m9.145783087s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:37.137539014 +0000 UTC m=+160.046258286" watchObservedRunningTime="2026-01-29 16:24:37.145783087 +0000 UTC m=+160.054502359" Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.147875 4886 patch_prober.go:28] interesting pod/downloads-7954f5f757-wczvq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.147939 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wczvq" podUID="d677ab93-2fac-4612-8558-8ffc559d5247" containerName="download-server" probeResult="failure" 
output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.153642 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-zjtrn" event={"ID":"204067d9-20d8-440f-88f4-57b6ce3a0ef1","Type":"ContainerStarted","Data":"1fb9aa1b80ea89057bce1e7439e8e9b8bf01b450e2ff768d21d19973cee0ab97"} Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.165576 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-dddt4" podStartSLOduration=5.16553161 podStartE2EDuration="5.16553161s" podCreationTimestamp="2026-01-29 16:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:37.164816989 +0000 UTC m=+160.073536271" watchObservedRunningTime="2026-01-29 16:24:37.16553161 +0000 UTC m=+160.074250882" Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.168246 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-zrg4t" event={"ID":"c5c84483-6cc1-4f51-86e1-330250fcb1d0","Type":"ContainerStarted","Data":"359b0628e5ecd91f53bfba5ff0bb026f02d63f0a692abe303fb23caa59cd98d8"} Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.174682 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wjnz" event={"ID":"9e017d9d-e6ec-4917-b888-987be0ce0523","Type":"ContainerStarted","Data":"9ddf6da4bcef92e4cb24c4cb7e4c8e023592c8430d76588b98d9cad3f5c9318f"} Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.174721 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wjnz" event={"ID":"9e017d9d-e6ec-4917-b888-987be0ce0523","Type":"ContainerStarted","Data":"5271bf27a593007c64c569093a7f198b0d1e9d4d9e2bfed961f9db0e882a02aa"} Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.203244 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pgq49" event={"ID":"ded0e679-6bf1-4d45-a59f-2c1b89bed863","Type":"ContainerStarted","Data":"dbba8a2764036b4443319fc148b78610ceda1be43b366379c5050cba6a63b4d1"} Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.210748 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-hvwx7" podStartSLOduration=129.210725233 podStartE2EDuration="2m9.210725233s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:37.203554172 +0000 UTC m=+160.112273444" watchObservedRunningTime="2026-01-29 16:24:37.210725233 +0000 UTC m=+160.119444505" Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.214169 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hjw5r" event={"ID":"43f861c0-d4a2-449e-b322-b92097bc56aa","Type":"ContainerStarted","Data":"6f0f983ef2b6681bd68fb689bc02e428b7f23cc8ab0456f0548e6f2da9ccafa7"} Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.220786 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-fgmg6" 
event={"ID":"3510e180-be29-469c-bfa0-b06702f80c93","Type":"ContainerStarted","Data":"3bda619ba48505389282db245e0a7774985507f6f49bb36bd81707c575636f20"} Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.223466 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n5wvz"] Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.229525 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p42xx" event={"ID":"643c5cab-3088-4021-a0ff-bb9e3c29326f","Type":"ContainerStarted","Data":"c26150638bd4487d09ca678724b5990748d6f426394af92e850a862100afd00d"} Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.246786 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:37 crc kubenswrapper[4886]: E0129 16:24:37.248691 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:37.748668422 +0000 UTC m=+160.657387694 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.249883 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.250937 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wjnz" podStartSLOduration=129.250911899 podStartE2EDuration="2m9.250911899s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:37.249363763 +0000 UTC m=+160.158083035" watchObservedRunningTime="2026-01-29 16:24:37.250911899 +0000 UTC m=+160.159631171" Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.273870 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495055-bkqmf"] Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.284251 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-wczvq" podStartSLOduration=129.284230932 podStartE2EDuration="2m9.284230932s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:37.283576032 +0000 UTC m=+160.192295314" watchObservedRunningTime="2026-01-29 16:24:37.284230932 +0000 UTC m=+160.192950204" Jan 29 
16:24:37 crc kubenswrapper[4886]: W0129 16:24:37.319939 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d938410_1a49_4580_9a4e_49de4bae378e.slice/crio-c242bf7423006a4a20c31e03cdab653fef5f18e3d8a4fdcc6f40eede027f7f00 WatchSource:0}: Error finding container c242bf7423006a4a20c31e03cdab653fef5f18e3d8a4fdcc6f40eede027f7f00: Status 404 returned error can't find the container with id c242bf7423006a4a20c31e03cdab653fef5f18e3d8a4fdcc6f40eede027f7f00 Jan 29 16:24:37 crc kubenswrapper[4886]: W0129 16:24:37.336821 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7a20685_8c41_4c3b_9b91_fe1e05cf5fe9.slice/crio-0d20bb5551ca7feda7d1ab34d809d68e52dc7cfd3aa9abdfcd5789f0817ad288 WatchSource:0}: Error finding container 0d20bb5551ca7feda7d1ab34d809d68e52dc7cfd3aa9abdfcd5789f0817ad288: Status 404 returned error can't find the container with id 0d20bb5551ca7feda7d1ab34d809d68e52dc7cfd3aa9abdfcd5789f0817ad288 Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.350505 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:37 crc kubenswrapper[4886]: E0129 16:24:37.355573 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:37.855550195 +0000 UTC m=+160.764269647 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.455974 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:37 crc kubenswrapper[4886]: E0129 16:24:37.458062 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:37.958031938 +0000 UTC m=+160.866751210 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.560635 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:37 crc kubenswrapper[4886]: E0129 16:24:37.561273 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:38.061252663 +0000 UTC m=+160.969971925 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.663611 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:37 crc kubenswrapper[4886]: E0129 16:24:37.664169 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:38.164146448 +0000 UTC m=+161.072865720 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.765882 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:37 crc kubenswrapper[4886]: E0129 16:24:37.766402 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:38.266381194 +0000 UTC m=+161.175100466 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.868526 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:37 crc kubenswrapper[4886]: E0129 16:24:37.868901 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:38.368880448 +0000 UTC m=+161.277599740 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:37 crc kubenswrapper[4886]: I0129 16:24:37.970644 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:37 crc kubenswrapper[4886]: E0129 16:24:37.971541 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:38.471507605 +0000 UTC m=+161.380226877 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.075618 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:38 crc kubenswrapper[4886]: E0129 16:24:38.076150 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:38.576119692 +0000 UTC m=+161.484838964 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.178098 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:38 crc kubenswrapper[4886]: E0129 16:24:38.179020 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:38.679005317 +0000 UTC m=+161.587724599 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.241476 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-76mxm" event={"ID":"66990908-26a0-4a12-a85b-304c4ed052a9","Type":"ContainerStarted","Data":"6c6968f298c44bf6ddb6ddd2d4e336cf5458674c89288366a7c81a8052808202"} Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.241539 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-76mxm" event={"ID":"66990908-26a0-4a12-a85b-304c4ed052a9","Type":"ContainerStarted","Data":"a293bf0b5ff504bf87e37397912f1d1e1062130bccb0b700b0c1fd27ae4e5bee"} Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.247316 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-8qsrq" event={"ID":"17aa0fcf-9538-4649-b9c8-0fdd6469c8da","Type":"ContainerStarted","Data":"4ab9d716bc75f7b92f2e4829c09daea7397c73fc67ecea0a5543d582893e3843"} Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.247386 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-8qsrq" event={"ID":"17aa0fcf-9538-4649-b9c8-0fdd6469c8da","Type":"ContainerStarted","Data":"1b0ede434448fe02a5f4d66c495253d3311534b53928516d64608cfc58ed010b"} Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.249688 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-x62jn" event={"ID":"d42a606f-2b2f-4782-ba98-15d8662eb3a9","Type":"ContainerStarted","Data":"1169eb0a87fa42c25aacc55b494d250d27857557e5c73d30b5dbdb59edc77a62"} Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.252710 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2c5f9" 
event={"ID":"50bf9e5e-0f33-48d1-ac4f-8da7cc905b6f","Type":"ContainerStarted","Data":"d3fb0c57b19842bc9184a0080bdc8c68c89353e3668eac7c358e0d83309f34f0"} Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.252761 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2c5f9" event={"ID":"50bf9e5e-0f33-48d1-ac4f-8da7cc905b6f","Type":"ContainerStarted","Data":"9ccc11f892163dc7024561fee2872f3d166f0584818e8dbe1aa56e1672e0c7fb"} Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.256035 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssftv" event={"ID":"14647a71-8c69-4ae7-919a-fe0ef1684c1f","Type":"ContainerStarted","Data":"ffc51645989a1f5c1b3eb400293dc43481ae2e86713123a847ee2abc395d0769"} Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.256110 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssftv" event={"ID":"14647a71-8c69-4ae7-919a-fe0ef1684c1f","Type":"ContainerStarted","Data":"03bc03bc771fe9d4ba9559b7f79d6ae8d3279429eb2d054e056060a61efb1ef7"} Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.257364 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssftv" Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.263282 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-fgmg6" event={"ID":"3510e180-be29-469c-bfa0-b06702f80c93","Type":"ContainerStarted","Data":"9ba1bbb1332dc5aac3a962be43ed4367129398efda0aa27fe08095e075ace1e8"} Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.268456 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-bkqmf" event={"ID":"a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9","Type":"ContainerStarted","Data":"e24030b3765055e623ca669573f5fe2306c10abdab283e014f331f200998a684"} Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.268768 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-bkqmf" event={"ID":"a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9","Type":"ContainerStarted","Data":"0d20bb5551ca7feda7d1ab34d809d68e52dc7cfd3aa9abdfcd5789f0817ad288"} Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.274015 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-2c5f9" podStartSLOduration=6.273994839 podStartE2EDuration="6.273994839s" podCreationTimestamp="2026-01-29 16:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:38.271351201 +0000 UTC m=+161.180070473" watchObservedRunningTime="2026-01-29 16:24:38.273994839 +0000 UTC m=+161.182714111" Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.275580 4886 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-ssftv container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" start-of-body= Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.275699 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssftv" 
podUID="14647a71-8c69-4ae7-919a-fe0ef1684c1f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.280936 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:38 crc kubenswrapper[4886]: E0129 16:24:38.281227 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:38.781209792 +0000 UTC m=+161.689929064 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.302703 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssftv" podStartSLOduration=129.302682785 podStartE2EDuration="2m9.302682785s" podCreationTimestamp="2026-01-29 16:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:38.301776339 +0000 UTC m=+161.210495621" watchObservedRunningTime="2026-01-29 16:24:38.302682785 +0000 UTC m=+161.211402077" Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.303419 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kr4cn" event={"ID":"f015af7b-346b-42a5-bea4-6f58b6ab41a7","Type":"ContainerStarted","Data":"c899750370e4d1e32d9bfb0aeac3516906256b7caf9fbbbd878ccab82c7cca43"} Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.303565 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kr4cn" event={"ID":"f015af7b-346b-42a5-bea4-6f58b6ab41a7","Type":"ContainerStarted","Data":"233993d0cfab4111592662d769ed7a8385498dc61b98ca2a663571d4b9e8d2b8"} Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.303678 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kr4cn" event={"ID":"f015af7b-346b-42a5-bea4-6f58b6ab41a7","Type":"ContainerStarted","Data":"30882df3cd136d02d24f63cd99ce372935c769bb83f388526e3a1595910e9cc8"} Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.325898 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-fgmg6" podStartSLOduration=129.32588413 podStartE2EDuration="2m9.32588413s" podCreationTimestamp="2026-01-29 16:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-29 16:24:38.324633213 +0000 UTC m=+161.233352485" watchObservedRunningTime="2026-01-29 16:24:38.32588413 +0000 UTC m=+161.234603402" Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.338993 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pgq49" event={"ID":"ded0e679-6bf1-4d45-a59f-2c1b89bed863","Type":"ContainerStarted","Data":"dcf2cf6a9a7edb7f8fa44982eee1269e55fb3b78f9fa8569386dada24c0ce4de"} Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.339060 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pgq49" event={"ID":"ded0e679-6bf1-4d45-a59f-2c1b89bed863","Type":"ContainerStarted","Data":"6ff08fc1ad9117a63a1601acaae933032b1cce6624116b14a9eb4416174fc6d6"} Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.345713 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-bkqmf" podStartSLOduration=130.345685954 podStartE2EDuration="2m10.345685954s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:38.344976653 +0000 UTC m=+161.253695925" watchObservedRunningTime="2026-01-29 16:24:38.345685954 +0000 UTC m=+161.254405226" Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.348522 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n5wvz" event={"ID":"8d938410-1a49-4580-9a4e-49de4bae378e","Type":"ContainerStarted","Data":"24d1b89bc1a41d8bf4a3f01715e42dad77a7a55b3bcb3aef68fa62249fa09414"} Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.348571 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n5wvz" event={"ID":"8d938410-1a49-4580-9a4e-49de4bae378e","Type":"ContainerStarted","Data":"c242bf7423006a4a20c31e03cdab653fef5f18e3d8a4fdcc6f40eede027f7f00"} Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.351651 4886 generic.go:334] "Generic (PLEG): container finished" podID="bffd7e9c-5274-4e27-b5d9-7e23ae3cbfbc" containerID="81ad64eaf119ac3b7e71156729526eba3e2744fdb6389b7f1a473cf082bf4679" exitCode=0 Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.351700 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bxbsl" event={"ID":"bffd7e9c-5274-4e27-b5d9-7e23ae3cbfbc","Type":"ContainerDied","Data":"81ad64eaf119ac3b7e71156729526eba3e2744fdb6389b7f1a473cf082bf4679"} Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.356535 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-f2q4h" event={"ID":"1d4089f1-878b-4fc4-b0ff-52a713c3b9ab","Type":"ContainerStarted","Data":"5f3de6f436ae6ac2aea7be5a87a7328004b6bc37f214a2654d6f31a9bd5972b4"} Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.356566 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-f2q4h" event={"ID":"1d4089f1-878b-4fc4-b0ff-52a713c3b9ab","Type":"ContainerStarted","Data":"b476d5316f99542fc5130b59e47fb58c1e303ec0e6f6323cfaea0d3b81c01a39"} Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.371318 4886 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kr4cn" podStartSLOduration=130.371259158 podStartE2EDuration="2m10.371259158s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:38.365963392 +0000 UTC m=+161.274682664" watchObservedRunningTime="2026-01-29 16:24:38.371259158 +0000 UTC m=+161.279978440"
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.384254 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86"
Jan 29 16:24:38 crc kubenswrapper[4886]: E0129 16:24:38.385817 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:38.885796717 +0000 UTC m=+161.794515989 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.390369 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pgq49" podStartSLOduration=130.390346151 podStartE2EDuration="2m10.390346151s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:38.387113596 +0000 UTC m=+161.295832868" watchObservedRunningTime="2026-01-29 16:24:38.390346151 +0000 UTC m=+161.299065453"
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.420178 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p42xx" event={"ID":"643c5cab-3088-4021-a0ff-bb9e3c29326f","Type":"ContainerStarted","Data":"799152b5432c00ab22530781404aae27ea11a1bc2d592a1411ff5812bc11ae73"}
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.421028 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p42xx"
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.422342 4886 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-p42xx container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body=
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.423906 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p42xx" podUID="643c5cab-3088-4021-a0ff-bb9e3c29326f" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused"
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.447679 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-z4r4v" event={"ID":"5ad09ea7-63c0-4583-acb7-da4ce7f694f4","Type":"ContainerStarted","Data":"95e1e5a0f148a065ddcb34c026a7ea21ce8ec478af86b9b6ea59385cfd5ac178"}
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.447746 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-z4r4v" event={"ID":"5ad09ea7-63c0-4583-acb7-da4ce7f694f4","Type":"ContainerStarted","Data":"80802df0f67fdee29b100ca2bed03d3ab5d8e65692e0899cb9e79c2680905751"}
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.450734 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n5wvz" podStartSLOduration=129.450702362 podStartE2EDuration="2m9.450702362s" podCreationTimestamp="2026-01-29 16:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:38.421295404 +0000 UTC m=+161.330014676" watchObservedRunningTime="2026-01-29 16:24:38.450702362 +0000 UTC m=+161.359421634"
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.456610 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fkbjz" event={"ID":"5d7225de-b290-4181-83e8-7de96446822f","Type":"ContainerStarted","Data":"7de758dfbb7505e502dc1e463c5b187ac0831fdc98ea780537a21ae301768600"}
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.473237 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-f2q4h" podStartSLOduration=129.473214136 podStartE2EDuration="2m9.473214136s" podCreationTimestamp="2026-01-29 16:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:38.47300955 +0000 UTC m=+161.381728832" watchObservedRunningTime="2026-01-29 16:24:38.473214136 +0000 UTC m=+161.381933408"
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.488779 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:24:38 crc kubenswrapper[4886]: E0129 16:24:38.489189 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:38.989144476 +0000 UTC m=+161.897863748 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.490181 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86"
Jan 29 16:24:38 crc kubenswrapper[4886]: E0129 16:24:38.491975 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:38.991956919 +0000 UTC m=+161.900676191 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.493859 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-blldt" event={"ID":"af8ac9dd-fc42-4e30-b840-c7f5ad734bea","Type":"ContainerStarted","Data":"26a98a9361fb1d06f0d082043abaeb1e7f7613b8439b801940cbba59ccf5942d"}
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.494433 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-blldt" event={"ID":"af8ac9dd-fc42-4e30-b840-c7f5ad734bea","Type":"ContainerStarted","Data":"37b8b7465836bbe71cad443a8e7a22c46274645e27c73704c38cd87e459f04a2"}
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.520933 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-z4r4v" podStartSLOduration=129.520909253 podStartE2EDuration="2m9.520909253s" podCreationTimestamp="2026-01-29 16:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:38.519885933 +0000 UTC m=+161.428605205" watchObservedRunningTime="2026-01-29 16:24:38.520909253 +0000 UTC m=+161.429628525"
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.529312 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4rg2h" event={"ID":"4d5118e4-db44-4e09-a04d-2036e251936b","Type":"ContainerStarted","Data":"074bdcd69e5d52baa3572c419d1d23725c2153e656e43405d65063d3d379a2ec"}
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.530302 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-4rg2h"
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.535492 4886 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-4rg2h container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.535548 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-4rg2h" podUID="4d5118e4-db44-4e09-a04d-2036e251936b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.538009 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hjw5r" event={"ID":"43f861c0-d4a2-449e-b322-b92097bc56aa","Type":"ContainerStarted","Data":"ebb15fba7f7141fec275e41f9fee9df7b45607819f8e6aac94ef98fed30228a9"}
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.538072 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hjw5r" event={"ID":"43f861c0-d4a2-449e-b322-b92097bc56aa","Type":"ContainerStarted","Data":"8ba454760d6f0d5b141edfaee1168ffd6b2f0a5ce9da70b5bded2603db2659c7"}
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.548763 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-w8bm4" event={"ID":"17accc89-e860-4b12-b5b3-3da7adaa3430","Type":"ContainerStarted","Data":"fd7fef5ae316b90316f06b6e489cce7174661acd1d0b44078f269a28b56f1f22"}
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.548813 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-w8bm4" event={"ID":"17accc89-e860-4b12-b5b3-3da7adaa3430","Type":"ContainerStarted","Data":"496e5ab4c79c2396e707c4fc94a4d2815e8f1572d6df45519acda3977888c122"}
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.549955 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-w8bm4"
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.551648 4886 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-w8bm4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.551694 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-w8bm4" podUID="17accc89-e860-4b12-b5b3-3da7adaa3430" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused"
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.564749 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m2x88" event={"ID":"99f63064-683c-4132-83b3-53480c64f426","Type":"ContainerStarted","Data":"52ec7cf2c1345089e47697132eaebcabf5577a03c53b2ad92879c03ca4141536"}
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.569283 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fkbjz" podStartSLOduration=130.56926127 podStartE2EDuration="2m10.56926127s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:38.561956444 +0000 UTC m=+161.470675716" watchObservedRunningTime="2026-01-29 16:24:38.56926127 +0000 UTC m=+161.477980542"
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.577986 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-plhr2" event={"ID":"bc749cf3-40b6-4957-ac19-a5d6db460e00","Type":"ContainerStarted","Data":"1dd464d234419c42c2d090ab20b6a6e29f38e8e20f1761d88840134d3d95f385"}
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.578038 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-plhr2" event={"ID":"bc749cf3-40b6-4957-ac19-a5d6db460e00","Type":"ContainerStarted","Data":"d809320ebf4a4764dcce4b83a403ff38990bfbd0f4df7657ad63cac0c0ff922a"}
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.596946 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:24:38 crc kubenswrapper[4886]: E0129 16:24:38.597804 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:39.097369909 +0000 UTC m=+162.006089191 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.599233 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z5kbx" event={"ID":"793b5b1f-d882-4f05-be9f-7515433a91e7","Type":"ContainerStarted","Data":"03d8c2a16387faad8ae373bd4fd7ff9d429cfb14ae10a25170d1b15f88286a7e"}
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.609948 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l5v6d" event={"ID":"009f91e7-865b-400a-a879-4985c84b321c","Type":"ContainerStarted","Data":"815e31b9cb175219ce441d9c8d8a7201be24b5dd17339fc697c192da774ebddf"}
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.623148 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p42xx" podStartSLOduration=129.623125069 podStartE2EDuration="2m9.623125069s" podCreationTimestamp="2026-01-29 16:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:38.596136132 +0000 UTC m=+161.504855414" watchObservedRunningTime="2026-01-29 16:24:38.623125069 +0000 UTC m=+161.531844341"
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.623253 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-w8bm4" podStartSLOduration=129.623247862 podStartE2EDuration="2m9.623247862s" podCreationTimestamp="2026-01-29 16:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:38.622218572 +0000 UTC m=+161.530937844" watchObservedRunningTime="2026-01-29 16:24:38.623247862 +0000 UTC m=+161.531967134"
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.624397 4886 generic.go:334] "Generic (PLEG): container finished" podID="7c5463e2-9818-4a5e-8dd0-36cd4c78d749" containerID="d39896999d683b47a472df50f6eb87bd8354d9c7284ffe8fe0a11a2959b8618f" exitCode=0
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.625081 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" event={"ID":"7c5463e2-9818-4a5e-8dd0-36cd4c78d749","Type":"ContainerDied","Data":"d39896999d683b47a472df50f6eb87bd8354d9c7284ffe8fe0a11a2959b8618f"}
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.642427 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-plhr2" podStartSLOduration=130.642176131 podStartE2EDuration="2m10.642176131s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:38.641657145 +0000 UTC m=+161.550376417" watchObservedRunningTime="2026-01-29 16:24:38.642176131 +0000 UTC m=+161.550895413"
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.651455 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jfbvx" event={"ID":"05a6a15e-b8e2-42b8-8e24-f891f348a835","Type":"ContainerStarted","Data":"c4444b57cee8001cb249e2badc8a5a9179bf75fc0ee157df68671556a08a0d24"}
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.657945 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-zrg4t" event={"ID":"c5c84483-6cc1-4f51-86e1-330250fcb1d0","Type":"ContainerStarted","Data":"6d075dbded8ab4ce6555cf05af3d8fea68c599ec8c4c80779abfbb08739f70f9"}
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.667030 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-blldt" podStartSLOduration=130.667010383 podStartE2EDuration="2m10.667010383s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:38.666436106 +0000 UTC m=+161.575155398" watchObservedRunningTime="2026-01-29 16:24:38.667010383 +0000 UTC m=+161.575729655"
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.667314 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pwcz" event={"ID":"0af2647d-2354-4929-914e-623c44c12232","Type":"ContainerStarted","Data":"36f25dff023cb1d70a79cf38d6ba870eecef7e87c5cba93137f46463eb0b1f59"}
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.667390 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pwcz" event={"ID":"0af2647d-2354-4929-914e-623c44c12232","Type":"ContainerStarted","Data":"dbb0a6bcde8023165d404ab0e594aeb554fbd7ad40722e526dbf6f133c817a6b"}
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.667700 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pwcz"
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.692360 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-wczvq" event={"ID":"d677ab93-2fac-4612-8558-8ffc559d5247","Type":"ContainerStarted","Data":"feeb1501e174ae22fe9bb3f57adbc069ea63e36086e0d07ec76fe67116a0a40f"}
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.693480 4886 patch_prober.go:28] interesting pod/downloads-7954f5f757-wczvq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.693513 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wczvq" podUID="d677ab93-2fac-4612-8558-8ffc559d5247" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.700185 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86"
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.704484 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-zjtrn" event={"ID":"204067d9-20d8-440f-88f4-57b6ce3a0ef1","Type":"ContainerStarted","Data":"b9cd761267b8a75f27b833765fff189e15b14fb870df6bb45dd95cba836fe8b5"}
Jan 29 16:24:38 crc kubenswrapper[4886]: E0129 16:24:38.705976 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:39.205961212 +0000 UTC m=+162.114680484 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.735981 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9" event={"ID":"eb068b0a-4b6b-48b7-bae4-ab193394f299","Type":"ContainerStarted","Data":"bf056c7b64d1db40a273e61237f21df213f55de77057daa8d3f79b233f6b1bca"}
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.737009 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9"
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.740855 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-4rg2h" podStartSLOduration=130.740835031 podStartE2EDuration="2m10.740835031s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:38.740551853 +0000 UTC m=+161.649271125" watchObservedRunningTime="2026-01-29 16:24:38.740835031 +0000 UTC m=+161.649554303"
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.741676 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m2x88" podStartSLOduration=130.741669546 podStartE2EDuration="2m10.741669546s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:38.70010611 +0000 UTC m=+161.608825382" watchObservedRunningTime="2026-01-29 16:24:38.741669546 +0000 UTC m=+161.650388818"
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.747415 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-spj4x" event={"ID":"e3790628-7588-42bf-ace6-04e2a0f1a09a","Type":"ContainerStarted","Data":"bd1fefc4b449625e3d98e8ec1ef3d21f6721aa72c2b82fad424c6188af276166"}
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.755542 4886 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-h57m9 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.755601 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9" podUID="eb068b0a-4b6b-48b7-bae4-ab193394f299" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.769289 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" event={"ID":"1d35f633-a6e9-4890-8c3f-ec87291ac03f","Type":"ContainerStarted","Data":"85fc74c67762fa998909efac146d0c8f2028093214134accbcd038b924c05d95"}
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.769676 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" event={"ID":"1d35f633-a6e9-4890-8c3f-ec87291ac03f","Type":"ContainerStarted","Data":"d2096978e2e1f1b99c5b9f22564409592e3177b3520a54f540b9113fbfdcf10b"}
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.774624 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hjw5r" podStartSLOduration=129.774599957 podStartE2EDuration="2m9.774599957s" podCreationTimestamp="2026-01-29 16:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:38.767981492 +0000 UTC m=+161.676700764" watchObservedRunningTime="2026-01-29 16:24:38.774599957 +0000 UTC m=+161.683319239"
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.781727 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-24n77" event={"ID":"b7f6ff84-95a3-4119-b688-1d28cc3fc4b8","Type":"ContainerStarted","Data":"981a9bbe7d4a9592e00a8f5dc7954086578b35091bd3178ab415313374bbd932"}
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.781780 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-24n77" event={"ID":"b7f6ff84-95a3-4119-b688-1d28cc3fc4b8","Type":"ContainerStarted","Data":"628ee9b87d4148b664711b85c520c37ec3e083e890e51a7c7027696496208b73"}
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.782026 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-24n77"
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.794455 4886 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-24n77 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body=
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.794509 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-24n77" podUID="b7f6ff84-95a3-4119-b688-1d28cc3fc4b8" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused"
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.796783 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-bj8hg" event={"ID":"79844037-42b5-456b-acbd-45fc61f251d9","Type":"ContainerStarted","Data":"f0e264215492d792aab963d1f450af95d59e745c857f963c3586ffdaa9960544"}
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.796814 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-bj8hg"
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.805438 4886 patch_prober.go:28] interesting pod/console-operator-58897d9998-bj8hg container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.805483 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bj8hg" podUID="79844037-42b5-456b-acbd-45fc61f251d9" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.810479 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:24:38 crc kubenswrapper[4886]: E0129 16:24:38.811951 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:39.311929488 +0000 UTC m=+162.220648810 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.821258 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pwcz" podStartSLOduration=129.821240043 podStartE2EDuration="2m9.821240043s" podCreationTimestamp="2026-01-29 16:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:38.814042511 +0000 UTC m=+161.722761793" watchObservedRunningTime="2026-01-29 16:24:38.821240043 +0000 UTC m=+161.729959305"
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.846415 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l5v6d" podStartSLOduration=129.846398725 podStartE2EDuration="2m9.846398725s" podCreationTimestamp="2026-01-29 16:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:38.845688444 +0000 UTC m=+161.754407716" watchObservedRunningTime="2026-01-29 16:24:38.846398725 +0000 UTC m=+161.755117997"
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.917480 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86"
Jan 29 16:24:38 crc kubenswrapper[4886]: E0129 16:24:38.918721 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:39.418706578 +0000 UTC m=+162.327425850 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:24:38 crc kubenswrapper[4886]: I0129 16:24:38.942709 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9" podStartSLOduration=129.942689116 podStartE2EDuration="2m9.942689116s" podCreationTimestamp="2026-01-29 16:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:38.93098004 +0000 UTC m=+161.839699332" watchObservedRunningTime="2026-01-29 16:24:38.942689116 +0000 UTC m=+161.851408388"
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.018624 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:24:39 crc kubenswrapper[4886]: E0129 16:24:39.018980 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:39.518961896 +0000 UTC m=+162.427681168 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.041146 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-zjtrn" podStartSLOduration=131.04111743 podStartE2EDuration="2m11.04111743s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:38.984015355 +0000 UTC m=+161.892734627" watchObservedRunningTime="2026-01-29 16:24:39.04111743 +0000 UTC m=+161.949836702"
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.078953 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-zrg4t" podStartSLOduration=131.078934235 podStartE2EDuration="2m11.078934235s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:39.039870933 +0000 UTC m=+161.948590235" watchObservedRunningTime="2026-01-29 16:24:39.078934235 +0000 UTC m=+161.987653507"
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.079557 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-spj4x" podStartSLOduration=131.079550623 podStartE2EDuration="2m11.079550623s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:39.077509613 +0000 UTC m=+161.986228885" watchObservedRunningTime="2026-01-29 16:24:39.079550623 +0000 UTC m=+161.988269895"
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.103701 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z5kbx" podStartSLOduration=131.103685925 podStartE2EDuration="2m11.103685925s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:39.100769209 +0000 UTC m=+162.009488491" watchObservedRunningTime="2026-01-29 16:24:39.103685925 +0000 UTC m=+162.012405197"
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.120908 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86"
Jan 29 16:24:39 crc kubenswrapper[4886]: E0129 16:24:39.121213 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:39.621201152 +0000 UTC m=+162.529920424 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.210476 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" podStartSLOduration=131.210454965 podStartE2EDuration="2m11.210454965s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:39.208190448 +0000 UTC m=+162.116909720" watchObservedRunningTime="2026-01-29 16:24:39.210454965 +0000 UTC m=+162.119174237"
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.221763 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:24:39 crc kubenswrapper[4886]: E0129 16:24:39.222149 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:39.72213132 +0000 UTC m=+162.630850592 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.242268 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-bj8hg" podStartSLOduration=131.242238103 podStartE2EDuration="2m11.242238103s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:39.240890713 +0000 UTC m=+162.149609985" watchObservedRunningTime="2026-01-29 16:24:39.242238103 +0000 UTC m=+162.150957375"
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.275589 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-24n77" podStartSLOduration=130.275570286 podStartE2EDuration="2m10.275570286s" podCreationTimestamp="2026-01-29 16:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:39.274932097 +0000 UTC m=+162.183651369" watchObservedRunningTime="2026-01-29 16:24:39.275570286 +0000 UTC m=+162.184289558"
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.323894 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86"
Jan 29 16:24:39 crc kubenswrapper[4886]: E0129 16:24:39.324276 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:39.824260372 +0000 UTC m=+162.732979644 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.406046 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-zrg4t"
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.413661 4886 patch_prober.go:28] interesting pod/router-default-5444994796-zrg4t container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:24:39 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld
Jan 29 16:24:39 crc kubenswrapper[4886]: [+]process-running ok
Jan 29 16:24:39 crc kubenswrapper[4886]: healthz check failed
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.413732 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zrg4t" podUID="c5c84483-6cc1-4f51-86e1-330250fcb1d0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.424836 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:24:39 crc kubenswrapper[4886]: E0129 16:24:39.424989 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:39.924953593 +0000 UTC m=+162.833672865 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.425445 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86"
Jan 29 16:24:39 crc kubenswrapper[4886]: E0129 16:24:39.425820 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:39.925810048 +0000 UTC m=+162.834529330 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.526053 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:24:39 crc kubenswrapper[4886]: E0129 16:24:39.526180 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:40.026156228 +0000 UTC m=+162.934875500 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.526526 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86"
Jan 29 16:24:39 crc kubenswrapper[4886]: E0129 16:24:39.526832 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:40.026822058 +0000 UTC m=+162.935541330 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.567159 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-v5s4w"
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.567207 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-v5s4w"
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.568823 4886 patch_prober.go:28] interesting pod/apiserver-76f77b778f-v5s4w container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.10:8443/livez\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.568933 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" podUID="1d35f633-a6e9-4890-8c3f-ec87291ac03f" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.10:8443/livez\": dial tcp 10.217.0.10:8443: connect: connection refused"
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.627988 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:24:39 crc kubenswrapper[4886]: E0129 16:24:39.628152 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:40.128132337 +0000 UTC m=+163.036851609 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.628560 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86"
Jan 29 16:24:39 crc kubenswrapper[4886]: E0129 16:24:39.628913 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:40.128899429 +0000 UTC m=+163.037618701 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.729177 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:24:39 crc kubenswrapper[4886]: E0129 16:24:39.729376 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:40.229350783 +0000 UTC m=+163.138070055 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.729891 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86"
Jan 29 16:24:39 crc kubenswrapper[4886]: E0129 16:24:39.730193 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:40.230181517 +0000 UTC m=+163.138900789 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.800278 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-8qsrq" event={"ID":"17aa0fcf-9538-4649-b9c8-0fdd6469c8da","Type":"ContainerStarted","Data":"0afe50e191b4c9d98601b3fdd46b249c3ba0170b382f0f6c689bb26ef30c1cf6"}
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.802024 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m2x88" event={"ID":"99f63064-683c-4132-83b3-53480c64f426","Type":"ContainerStarted","Data":"71214323d6044f965b335597096d3e832fb47734f0936cd4440c17ea5866a04a"}
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.803373 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-x62jn" event={"ID":"d42a606f-2b2f-4782-ba98-15d8662eb3a9","Type":"ContainerStarted","Data":"ea11fa7dacbb58f3e3576e241752dadfb8d6fe95962bd53b00687b73790b1a31"}
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.804707 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pwcz" event={"ID":"0af2647d-2354-4929-914e-623c44c12232","Type":"ContainerStarted","Data":"4180e4345774663aced13be0ce1cba4a111dfa59478fc1936cb0b6cd99bfe023"}
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.806405 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" event={"ID":"7c5463e2-9818-4a5e-8dd0-36cd4c78d749","Type":"ContainerStarted","Data":"150ccb56ad144089ee9a6ee54cd337550d8ecafcbbcb4e0359a39e643177b406"}
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.807298 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jfbvx" event={"ID":"05a6a15e-b8e2-42b8-8e24-f891f348a835","Type":"ContainerStarted","Data":"67cb587d47dc2c0fea71d2ca211d9f9ee8947591c33a8ba8170ec38fa0c5bcfd"}
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.809127 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bxbsl" event={"ID":"bffd7e9c-5274-4e27-b5d9-7e23ae3cbfbc","Type":"ContainerStarted","Data":"e740014567ae47879287e08cfc1d9c41d538c63bab2990eaea6d92f6bc770838"}
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.812948 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-76mxm" event={"ID":"66990908-26a0-4a12-a85b-304c4ed052a9","Type":"ContainerStarted","Data":"9a48d43cd1a8f0e7c620b6310a82d98c024b48c849ab798f118379045b52155a"}
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.813609 4886 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-ssftv container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" start-of-body=
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.813643 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssftv" podUID="14647a71-8c69-4ae7-919a-fe0ef1684c1f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused"
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.813662 4886 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-w8bm4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.813663 4886 patch_prober.go:28] interesting pod/downloads-7954f5f757-wczvq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.813694 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-w8bm4" podUID="17accc89-e860-4b12-b5b3-3da7adaa3430" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused"
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.813709 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wczvq" podUID="d677ab93-2fac-4612-8558-8ffc559d5247" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.814416 4886 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-24n77 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body=
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.814442 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-24n77" podUID="b7f6ff84-95a3-4119-b688-1d28cc3fc4b8" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused"
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.816936 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-76mxm"
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.820206 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-4rg2h"
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.823538 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9"
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.827643 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-8qsrq" podStartSLOduration=130.827627442 podStartE2EDuration="2m10.827627442s" podCreationTimestamp="2026-01-29 16:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:39.827126567 +0000 UTC m=+162.735845839" watchObservedRunningTime="2026-01-29 16:24:39.827627442 +0000 UTC m=+162.736346724"
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.851229 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:24:39 crc kubenswrapper[4886]: E0129 16:24:39.851789 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:40.351762094 +0000 UTC m=+163.260481366 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.852113 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p42xx"
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.951367 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-76mxm" podStartSLOduration=7.951351722 podStartE2EDuration="7.951351722s" podCreationTimestamp="2026-01-29 16:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:39.951065173 +0000 UTC m=+162.859784455" watchObservedRunningTime="2026-01-29 16:24:39.951351722 +0000 UTC m=+162.860070994"
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.953188 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86"
Jan 29 16:24:39 crc kubenswrapper[4886]: E0129 16:24:39.956865 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:40.456852884 +0000 UTC m=+163.365572156 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:24:39 crc kubenswrapper[4886]: I0129 16:24:39.976370 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" podStartSLOduration=130.976306938 podStartE2EDuration="2m10.976306938s" podCreationTimestamp="2026-01-29 16:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:39.974863765 +0000 UTC m=+162.883583047" watchObservedRunningTime="2026-01-29 16:24:39.976306938 +0000 UTC m=+162.885026210"
Jan 29 16:24:40 crc kubenswrapper[4886]: I0129 16:24:40.055134 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:24:40 crc kubenswrapper[4886]: E0129 16:24:40.055434 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:40.555420092 +0000 UTC m=+163.464139364 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:40 crc kubenswrapper[4886]: I0129 16:24:40.138633 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bxbsl" podStartSLOduration=132.138612456 podStartE2EDuration="2m12.138612456s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:40.08890938 +0000 UTC m=+162.997628652" watchObservedRunningTime="2026-01-29 16:24:40.138612456 +0000 UTC m=+163.047331718" Jan 29 16:24:40 crc kubenswrapper[4886]: I0129 16:24:40.141182 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-bj8hg" Jan 29 16:24:40 crc kubenswrapper[4886]: I0129 16:24:40.156479 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:40 crc kubenswrapper[4886]: E0129 16:24:40.156816 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:40.656804673 +0000 UTC m=+163.565523945 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:40 crc kubenswrapper[4886]: I0129 16:24:40.211852 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-x62jn" podStartSLOduration=132.211831516 podStartE2EDuration="2m12.211831516s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:40.139446251 +0000 UTC m=+163.048165523" watchObservedRunningTime="2026-01-29 16:24:40.211831516 +0000 UTC m=+163.120550788" Jan 29 16:24:40 crc kubenswrapper[4886]: I0129 16:24:40.257885 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:40 crc kubenswrapper[4886]: E0129 16:24:40.258213 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:40.758198164 +0000 UTC m=+163.666917436 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:40 crc kubenswrapper[4886]: I0129 16:24:40.360522 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:40 crc kubenswrapper[4886]: E0129 16:24:40.360988 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:40.860969096 +0000 UTC m=+163.769688368 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:40 crc kubenswrapper[4886]: I0129 16:24:40.409688 4886 patch_prober.go:28] interesting pod/router-default-5444994796-zrg4t container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:24:40 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Jan 29 16:24:40 crc kubenswrapper[4886]: [+]process-running ok Jan 29 16:24:40 crc kubenswrapper[4886]: healthz check failed Jan 29 16:24:40 crc kubenswrapper[4886]: I0129 16:24:40.409753 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zrg4t" podUID="c5c84483-6cc1-4f51-86e1-330250fcb1d0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:24:40 crc kubenswrapper[4886]: I0129 16:24:40.461490 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:40 crc kubenswrapper[4886]: E0129 16:24:40.461743 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:40.961697927 +0000 UTC m=+163.870417199 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:40 crc kubenswrapper[4886]: I0129 16:24:40.462046 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:40 crc kubenswrapper[4886]: E0129 16:24:40.462399 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:40.962384807 +0000 UTC m=+163.871104079 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:40 crc kubenswrapper[4886]: I0129 16:24:40.562912 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:40 crc kubenswrapper[4886]: E0129 16:24:40.563183 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:41.06314561 +0000 UTC m=+163.971864882 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:40 crc kubenswrapper[4886]: I0129 16:24:40.563434 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:40 crc kubenswrapper[4886]: E0129 16:24:40.563820 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:41.063805829 +0000 UTC m=+163.972525101 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:40 crc kubenswrapper[4886]: I0129 16:24:40.664507 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:40 crc kubenswrapper[4886]: E0129 16:24:40.664704 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:41.164675235 +0000 UTC m=+164.073394507 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:40 crc kubenswrapper[4886]: I0129 16:24:40.664864 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:40 crc kubenswrapper[4886]: E0129 16:24:40.665181 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:41.16517019 +0000 UTC m=+164.073889462 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:40 crc kubenswrapper[4886]: I0129 16:24:40.766560 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:40 crc kubenswrapper[4886]: E0129 16:24:40.766815 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:41.266774827 +0000 UTC m=+164.175494099 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:40 crc kubenswrapper[4886]: I0129 16:24:40.767043 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:40 crc kubenswrapper[4886]: E0129 16:24:40.767374 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:41.267360234 +0000 UTC m=+164.176079506 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:40 crc kubenswrapper[4886]: I0129 16:24:40.821729 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bxbsl" Jan 29 16:24:40 crc kubenswrapper[4886]: I0129 16:24:40.822302 4886 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-w8bm4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 29 16:24:40 crc kubenswrapper[4886]: I0129 16:24:40.822374 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-w8bm4" podUID="17accc89-e860-4b12-b5b3-3da7adaa3430" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 29 16:24:40 crc kubenswrapper[4886]: I0129 16:24:40.867978 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:40 crc kubenswrapper[4886]: E0129 16:24:40.868079 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:41.368062525 +0000 UTC m=+164.276781797 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:40 crc kubenswrapper[4886]: I0129 16:24:40.868122 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:40 crc kubenswrapper[4886]: E0129 16:24:40.868403 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:41.368394235 +0000 UTC m=+164.277113497 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:40 crc kubenswrapper[4886]: I0129 16:24:40.969468 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:40 crc kubenswrapper[4886]: E0129 16:24:40.969594 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:41.469578309 +0000 UTC m=+164.378297581 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:40 crc kubenswrapper[4886]: I0129 16:24:40.969731 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:40 crc kubenswrapper[4886]: E0129 16:24:40.973635 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:41.473618298 +0000 UTC m=+164.382337570 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:41 crc kubenswrapper[4886]: I0129 16:24:41.071629 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:41 crc kubenswrapper[4886]: E0129 16:24:41.071741 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:41.571722962 +0000 UTC m=+164.480442234 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:41 crc kubenswrapper[4886]: I0129 16:24:41.071975 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:41 crc kubenswrapper[4886]: E0129 16:24:41.072225 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:41.572217987 +0000 UTC m=+164.480937259 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:41 crc kubenswrapper[4886]: I0129 16:24:41.173254 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:41 crc kubenswrapper[4886]: E0129 16:24:41.173451 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:41.673425433 +0000 UTC m=+164.582144705 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:41 crc kubenswrapper[4886]: I0129 16:24:41.173520 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:41 crc kubenswrapper[4886]: E0129 16:24:41.173788 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:41.673780123 +0000 UTC m=+164.582499395 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:41 crc kubenswrapper[4886]: I0129 16:24:41.274994 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:41 crc kubenswrapper[4886]: E0129 16:24:41.275226 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:41.775169994 +0000 UTC m=+164.683889266 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:41 crc kubenswrapper[4886]: I0129 16:24:41.275279 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:41 crc kubenswrapper[4886]: E0129 16:24:41.275898 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:41.775888665 +0000 UTC m=+164.684607937 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:41 crc kubenswrapper[4886]: I0129 16:24:41.376224 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:41 crc kubenswrapper[4886]: E0129 16:24:41.376620 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:41.876601486 +0000 UTC m=+164.785320758 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:41 crc kubenswrapper[4886]: I0129 16:24:41.413062 4886 patch_prober.go:28] interesting pod/router-default-5444994796-zrg4t container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:24:41 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Jan 29 16:24:41 crc kubenswrapper[4886]: [+]process-running ok Jan 29 16:24:41 crc kubenswrapper[4886]: healthz check failed Jan 29 16:24:41 crc kubenswrapper[4886]: I0129 16:24:41.413128 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zrg4t" podUID="c5c84483-6cc1-4f51-86e1-330250fcb1d0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:24:41 crc kubenswrapper[4886]: I0129 16:24:41.477891 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:41 crc kubenswrapper[4886]: E0129 16:24:41.478280 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:41.978265685 +0000 UTC m=+164.886984957 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:41 crc kubenswrapper[4886]: I0129 16:24:41.579419 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:41 crc kubenswrapper[4886]: E0129 16:24:41.579687 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:42.079656666 +0000 UTC m=+164.988375958 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:41 crc kubenswrapper[4886]: I0129 16:24:41.579906 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:41 crc kubenswrapper[4886]: E0129 16:24:41.580229 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:42.080216603 +0000 UTC m=+164.988935875 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:41 crc kubenswrapper[4886]: I0129 16:24:41.681150 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:41 crc kubenswrapper[4886]: E0129 16:24:41.681616 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:42.181596054 +0000 UTC m=+165.090315326 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:41 crc kubenswrapper[4886]: I0129 16:24:41.777237 4886 csr.go:261] certificate signing request csr-2nffz is approved, waiting to be issued Jan 29 16:24:41 crc kubenswrapper[4886]: I0129 16:24:41.783238 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:41 crc kubenswrapper[4886]: E0129 16:24:41.783788 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:42.283731166 +0000 UTC m=+165.192450438 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:41 crc kubenswrapper[4886]: I0129 16:24:41.789712 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ssftv" Jan 29 16:24:41 crc kubenswrapper[4886]: I0129 16:24:41.804598 4886 csr.go:257] certificate signing request csr-2nffz is issued Jan 29 16:24:41 crc kubenswrapper[4886]: I0129 16:24:41.848205 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jfbvx" event={"ID":"05a6a15e-b8e2-42b8-8e24-f891f348a835","Type":"ContainerStarted","Data":"e2ebfd03b1acfbb78cd4318c57ee8f6f11af8f9139a2ae4a58e557a592d4a092"} Jan 29 16:24:41 crc kubenswrapper[4886]: I0129 16:24:41.848254 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jfbvx" event={"ID":"05a6a15e-b8e2-42b8-8e24-f891f348a835","Type":"ContainerStarted","Data":"724e2a99ef125052b0f2bff0240c4b8f1f700aa97e8ce8ebd962d35e65f56722"} Jan 29 16:24:41 crc kubenswrapper[4886]: I0129 16:24:41.884788 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:41 crc kubenswrapper[4886]: E0129 16:24:41.884958 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:42.384931122 +0000 UTC m=+165.293650394 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:41 crc kubenswrapper[4886]: I0129 16:24:41.885039 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:41 crc kubenswrapper[4886]: E0129 16:24:41.885372 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:42.385361445 +0000 UTC m=+165.294080797 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:41 crc kubenswrapper[4886]: I0129 16:24:41.985566 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:41 crc kubenswrapper[4886]: E0129 16:24:41.985790 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:42.485755936 +0000 UTC m=+165.394475208 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:41 crc kubenswrapper[4886]: I0129 16:24:41.985834 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:41 crc kubenswrapper[4886]: E0129 16:24:41.986666 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:42.486648683 +0000 UTC m=+165.395367955 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.086604 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:42 crc kubenswrapper[4886]: E0129 16:24:42.087044 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:42.587015774 +0000 UTC m=+165.495735046 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.187835 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:42 crc kubenswrapper[4886]: E0129 16:24:42.188351 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:42.688313542 +0000 UTC m=+165.597032814 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.288656 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:24:42 crc kubenswrapper[4886]: E0129 16:24:42.289019 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:42.789005292 +0000 UTC m=+165.697724564 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.315227 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cj9vs"]
Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.316127 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cj9vs"
Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.323679 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.337823 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cj9vs"]
Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.389858 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86"
Jan 29 16:24:42 crc kubenswrapper[4886]: E0129 16:24:42.390201 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:42.890188087 +0000 UTC m=+165.798907359 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.410714 4886 patch_prober.go:28] interesting pod/router-default-5444994796-zrg4t container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:24:42 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld
Jan 29 16:24:42 crc kubenswrapper[4886]: [+]process-running ok
Jan 29 16:24:42 crc kubenswrapper[4886]: healthz check failed
Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.410779 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zrg4t" podUID="c5c84483-6cc1-4f51-86e1-330250fcb1d0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.490464 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xcj6l"]
Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.490634 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.490761 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gjgt\" (UniqueName: \"kubernetes.io/projected/434ccaea-8a30-4a97-8908-64bc9f550de0-kube-api-access-4gjgt\") pod \"community-operators-cj9vs\" (UID: \"434ccaea-8a30-4a97-8908-64bc9f550de0\") " pod="openshift-marketplace/community-operators-cj9vs"
Jan 29 16:24:42 crc kubenswrapper[4886]: E0129 16:24:42.490825 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:42.990811426 +0000 UTC m=+165.899530688 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.490860 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/434ccaea-8a30-4a97-8908-64bc9f550de0-utilities\") pod \"community-operators-cj9vs\" (UID: \"434ccaea-8a30-4a97-8908-64bc9f550de0\") " pod="openshift-marketplace/community-operators-cj9vs"
Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.491006 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86"
Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.491085 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/434ccaea-8a30-4a97-8908-64bc9f550de0-catalog-content\") pod \"community-operators-cj9vs\" (UID: \"434ccaea-8a30-4a97-8908-64bc9f550de0\") " pod="openshift-marketplace/community-operators-cj9vs"
Jan 29 16:24:42 crc kubenswrapper[4886]: E0129 16:24:42.491376 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:42.991359872 +0000 UTC m=+165.900079144 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
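The paired nestedpendingoperations errors above are the kubelet's per-volume retry gate at work: each failed mount or unmount stamps the operation with an earliest-retry deadline ("No retries permitted until ..."), and the reconciler skips the operation until that deadline passes. A minimal Go sketch of such a gate, seeded with the 500ms durationBeforeRetry visible in the log; the doubling and the cap here are illustrative assumptions, not the kubelet's actual constants:

    // Sketch only, not kubelet source: models the "No retries permitted
    // until ..." behavior visible in the nestedpendingoperations errors.
    package main

    import (
        "fmt"
        "time"
    )

    type retryGate struct {
        durationBeforeRetry time.Duration // 500ms in the log above
        lastErrorTime       time.Time
    }

    // markFailed records a failure; as an assumption for illustration, the
    // delay doubles on consecutive failures up to maxBackoff.
    func (g *retryGate) markFailed(now time.Time, maxBackoff time.Duration) {
        switch {
        case g.durationBeforeRetry == 0:
            g.durationBeforeRetry = 500 * time.Millisecond
        case g.durationBeforeRetry*2 <= maxBackoff:
            g.durationBeforeRetry *= 2
        }
        g.lastErrorTime = now
    }

    // safeToRetry reports whether the recorded deadline has passed.
    func (g *retryGate) safeToRetry(now time.Time) error {
        deadline := g.lastErrorTime.Add(g.durationBeforeRetry)
        if now.Before(deadline) {
            return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)",
                deadline.Format(time.RFC3339Nano), g.durationBeforeRetry)
        }
        return nil
    }

    func main() {
        g := &retryGate{}
        g.markFailed(time.Now(), 2*time.Minute)
        fmt.Println(g.safeToRetry(time.Now())) // mirrors the log's error line
    }

The gate explains why the same MountDevice/TearDown failure repeats at sub-second intervals below without hot-looping: each attempt is deferred by at least the printed delay.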
Need to start a new one" pod="openshift-marketplace/certified-operators-xcj6l" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.493995 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.545464 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xcj6l"] Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.554377 4886 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.592624 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.592878 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gjgt\" (UniqueName: \"kubernetes.io/projected/434ccaea-8a30-4a97-8908-64bc9f550de0-kube-api-access-4gjgt\") pod \"community-operators-cj9vs\" (UID: \"434ccaea-8a30-4a97-8908-64bc9f550de0\") " pod="openshift-marketplace/community-operators-cj9vs" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.592914 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/047adc93-cb46-4ba7-bbdf-4d485a08ea6b-catalog-content\") pod \"certified-operators-xcj6l\" (UID: \"047adc93-cb46-4ba7-bbdf-4d485a08ea6b\") " pod="openshift-marketplace/certified-operators-xcj6l" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.592936 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/434ccaea-8a30-4a97-8908-64bc9f550de0-utilities\") pod \"community-operators-cj9vs\" (UID: \"434ccaea-8a30-4a97-8908-64bc9f550de0\") " pod="openshift-marketplace/community-operators-cj9vs" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.592978 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/047adc93-cb46-4ba7-bbdf-4d485a08ea6b-utilities\") pod \"certified-operators-xcj6l\" (UID: \"047adc93-cb46-4ba7-bbdf-4d485a08ea6b\") " pod="openshift-marketplace/certified-operators-xcj6l" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.593000 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn6qn\" (UniqueName: \"kubernetes.io/projected/047adc93-cb46-4ba7-bbdf-4d485a08ea6b-kube-api-access-xn6qn\") pod \"certified-operators-xcj6l\" (UID: \"047adc93-cb46-4ba7-bbdf-4d485a08ea6b\") " pod="openshift-marketplace/certified-operators-xcj6l" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.593024 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/434ccaea-8a30-4a97-8908-64bc9f550de0-catalog-content\") pod \"community-operators-cj9vs\" (UID: \"434ccaea-8a30-4a97-8908-64bc9f550de0\") " pod="openshift-marketplace/community-operators-cj9vs" Jan 29 16:24:42 crc 
kubenswrapper[4886]: I0129 16:24:42.593448 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/434ccaea-8a30-4a97-8908-64bc9f550de0-catalog-content\") pod \"community-operators-cj9vs\" (UID: \"434ccaea-8a30-4a97-8908-64bc9f550de0\") " pod="openshift-marketplace/community-operators-cj9vs" Jan 29 16:24:42 crc kubenswrapper[4886]: E0129 16:24:42.593512 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:43.093497895 +0000 UTC m=+166.002217167 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.594011 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/434ccaea-8a30-4a97-8908-64bc9f550de0-utilities\") pod \"community-operators-cj9vs\" (UID: \"434ccaea-8a30-4a97-8908-64bc9f550de0\") " pod="openshift-marketplace/community-operators-cj9vs" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.618735 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gjgt\" (UniqueName: \"kubernetes.io/projected/434ccaea-8a30-4a97-8908-64bc9f550de0-kube-api-access-4gjgt\") pod \"community-operators-cj9vs\" (UID: \"434ccaea-8a30-4a97-8908-64bc9f550de0\") " pod="openshift-marketplace/community-operators-cj9vs" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.629022 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cj9vs" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.694171 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.694240 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/047adc93-cb46-4ba7-bbdf-4d485a08ea6b-utilities\") pod \"certified-operators-xcj6l\" (UID: \"047adc93-cb46-4ba7-bbdf-4d485a08ea6b\") " pod="openshift-marketplace/certified-operators-xcj6l" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.694267 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn6qn\" (UniqueName: \"kubernetes.io/projected/047adc93-cb46-4ba7-bbdf-4d485a08ea6b-kube-api-access-xn6qn\") pod \"certified-operators-xcj6l\" (UID: \"047adc93-cb46-4ba7-bbdf-4d485a08ea6b\") " pod="openshift-marketplace/certified-operators-xcj6l" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.694377 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/047adc93-cb46-4ba7-bbdf-4d485a08ea6b-catalog-content\") pod \"certified-operators-xcj6l\" (UID: \"047adc93-cb46-4ba7-bbdf-4d485a08ea6b\") " pod="openshift-marketplace/certified-operators-xcj6l" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.694859 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/047adc93-cb46-4ba7-bbdf-4d485a08ea6b-catalog-content\") pod \"certified-operators-xcj6l\" (UID: \"047adc93-cb46-4ba7-bbdf-4d485a08ea6b\") " pod="openshift-marketplace/certified-operators-xcj6l" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.694905 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-psrrq"] Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.696019 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-psrrq" Jan 29 16:24:42 crc kubenswrapper[4886]: E0129 16:24:42.702026 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:24:43.202009006 +0000 UTC m=+166.110728278 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-44l86" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.702543 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/047adc93-cb46-4ba7-bbdf-4d485a08ea6b-utilities\") pod \"certified-operators-xcj6l\" (UID: \"047adc93-cb46-4ba7-bbdf-4d485a08ea6b\") " pod="openshift-marketplace/certified-operators-xcj6l" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.710301 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-psrrq"] Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.752156 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn6qn\" (UniqueName: \"kubernetes.io/projected/047adc93-cb46-4ba7-bbdf-4d485a08ea6b-kube-api-access-xn6qn\") pod \"certified-operators-xcj6l\" (UID: \"047adc93-cb46-4ba7-bbdf-4d485a08ea6b\") " pod="openshift-marketplace/certified-operators-xcj6l" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.795618 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.795870 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a50cf2f-b08d-4f5c-a364-d939d83aa205-catalog-content\") pod \"community-operators-psrrq\" (UID: \"9a50cf2f-b08d-4f5c-a364-d939d83aa205\") " pod="openshift-marketplace/community-operators-psrrq" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.795963 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a50cf2f-b08d-4f5c-a364-d939d83aa205-utilities\") pod \"community-operators-psrrq\" (UID: \"9a50cf2f-b08d-4f5c-a364-d939d83aa205\") " pod="openshift-marketplace/community-operators-psrrq" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.796026 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g68vd\" (UniqueName: \"kubernetes.io/projected/9a50cf2f-b08d-4f5c-a364-d939d83aa205-kube-api-access-g68vd\") pod \"community-operators-psrrq\" (UID: \"9a50cf2f-b08d-4f5c-a364-d939d83aa205\") " pod="openshift-marketplace/community-operators-psrrq" Jan 29 16:24:42 crc kubenswrapper[4886]: E0129 16:24:42.796166 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:24:43.296151383 +0000 UTC m=+166.204870655 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.804540 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xcj6l" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.806169 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-29 16:19:41 +0000 UTC, rotation deadline is 2026-12-15 07:13:29.122304294 +0000 UTC Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.806195 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7670h48m46.316112634s for next certificate rotation Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.824704 4886 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-29T16:24:42.554397932Z","Handler":null,"Name":""} Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.845855 4886 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.845901 4886 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.848492 4886 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-bxbsl container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.848630 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bxbsl" podUID="bffd7e9c-5274-4e27-b5d9-7e23ae3cbfbc" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.895078 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jfbvx" event={"ID":"05a6a15e-b8e2-42b8-8e24-f891f348a835","Type":"ContainerStarted","Data":"6ef7495c46a61209631d38e499b780e0acf80c55600e1d609062fdd2fb82bb0b"} Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.896662 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a50cf2f-b08d-4f5c-a364-d939d83aa205-utilities\") pod \"community-operators-psrrq\" (UID: \"9a50cf2f-b08d-4f5c-a364-d939d83aa205\") " pod="openshift-marketplace/community-operators-psrrq" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.896735 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.896762 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g68vd\" (UniqueName: \"kubernetes.io/projected/9a50cf2f-b08d-4f5c-a364-d939d83aa205-kube-api-access-g68vd\") pod \"community-operators-psrrq\" (UID: \"9a50cf2f-b08d-4f5c-a364-d939d83aa205\") " pod="openshift-marketplace/community-operators-psrrq" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.896805 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a50cf2f-b08d-4f5c-a364-d939d83aa205-catalog-content\") pod \"community-operators-psrrq\" (UID: \"9a50cf2f-b08d-4f5c-a364-d939d83aa205\") " pod="openshift-marketplace/community-operators-psrrq" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.897311 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a50cf2f-b08d-4f5c-a364-d939d83aa205-catalog-content\") pod \"community-operators-psrrq\" (UID: \"9a50cf2f-b08d-4f5c-a364-d939d83aa205\") " pod="openshift-marketplace/community-operators-psrrq" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.897640 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a50cf2f-b08d-4f5c-a364-d939d83aa205-utilities\") pod \"community-operators-psrrq\" (UID: \"9a50cf2f-b08d-4f5c-a364-d939d83aa205\") " pod="openshift-marketplace/community-operators-psrrq" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.909570 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qjqm7"] Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.911079 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qjqm7" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.915964 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.916031 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.931810 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g68vd\" (UniqueName: \"kubernetes.io/projected/9a50cf2f-b08d-4f5c-a364-d939d83aa205-kube-api-access-g68vd\") pod \"community-operators-psrrq\" (UID: \"9a50cf2f-b08d-4f5c-a364-d939d83aa205\") " pod="openshift-marketplace/community-operators-psrrq" Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.945458 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qjqm7"] Jan 29 16:24:42 crc kubenswrapper[4886]: I0129 16:24:42.949057 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-jfbvx" podStartSLOduration=10.949035343 podStartE2EDuration="10.949035343s" podCreationTimestamp="2026-01-29 16:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:42.947397205 +0000 UTC m=+165.856116477" watchObservedRunningTime="2026-01-29 16:24:42.949035343 +0000 UTC m=+165.857754616" Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.016767 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cj9vs"] Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.026473 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-44l86\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") " pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.030090 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-psrrq" Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.049542 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.050251 4886 util.go:30] "No sandbox for pod can be found. 
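The recovery is visible in the entries above: plugin_watcher.go notices the driver's registration socket appear under /var/lib/kubelet/plugins_registry, csi_plugin.go validates and registers kubevirt.io.hostpath-provisioner, and the previously stuck MountDevice/SetUp operations succeed on their next retry. A minimal sketch of that kind of socket watch using the fsnotify library; the real kubelet additionally dials the socket and issues the plugin-registration GetInfo RPC, which is omitted here:

    // Sketch only: watch the kubelet plugin registry for new registration
    // sockets, the event logged by plugin_watcher.go:194 above.
    package main

    import (
        "log"
        "strings"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()
        if err := w.Add("/var/lib/kubelet/plugins_registry"); err != nil {
            log.Fatal(err)
        }
        for ev := range w.Events {
            // A create event for kubevirt.io.hostpath-provisioner-reg.sock
            // is what flips the driver from "not found" to registered.
            if ev.Op&fsnotify.Create != 0 && strings.HasSuffix(ev.Name, "-reg.sock") {
                log.Printf("plugin socket appeared: %s", ev.Name)
            }
        }
    }

Note the STAGE_UNSTAGE_VOLUME line above: because this driver does not advertise that capability, the kubelet skips the real staging work and MountDevice completes immediately once the driver is registered.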
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.063044 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.063346 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.075882 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.102546 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.102792 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs29d\" (UniqueName: \"kubernetes.io/projected/057806c7-b5ca-43df-91c7-30a2dc58c011-kube-api-access-gs29d\") pod \"certified-operators-qjqm7\" (UID: \"057806c7-b5ca-43df-91c7-30a2dc58c011\") " pod="openshift-marketplace/certified-operators-qjqm7" Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.102848 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/057806c7-b5ca-43df-91c7-30a2dc58c011-utilities\") pod \"certified-operators-qjqm7\" (UID: \"057806c7-b5ca-43df-91c7-30a2dc58c011\") " pod="openshift-marketplace/certified-operators-qjqm7" Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.102866 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/057806c7-b5ca-43df-91c7-30a2dc58c011-catalog-content\") pod \"certified-operators-qjqm7\" (UID: \"057806c7-b5ca-43df-91c7-30a2dc58c011\") " pod="openshift-marketplace/certified-operators-qjqm7" Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.109938 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.187688 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.204238 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs29d\" (UniqueName: \"kubernetes.io/projected/057806c7-b5ca-43df-91c7-30a2dc58c011-kube-api-access-gs29d\") pod \"certified-operators-qjqm7\" (UID: \"057806c7-b5ca-43df-91c7-30a2dc58c011\") " pod="openshift-marketplace/certified-operators-qjqm7" Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.204306 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3ca4de0-b24b-4085-88a1-80679f676a50-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a3ca4de0-b24b-4085-88a1-80679f676a50\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.204352 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/057806c7-b5ca-43df-91c7-30a2dc58c011-utilities\") pod \"certified-operators-qjqm7\" (UID: \"057806c7-b5ca-43df-91c7-30a2dc58c011\") " pod="openshift-marketplace/certified-operators-qjqm7" Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.204373 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/057806c7-b5ca-43df-91c7-30a2dc58c011-catalog-content\") pod \"certified-operators-qjqm7\" (UID: \"057806c7-b5ca-43df-91c7-30a2dc58c011\") " pod="openshift-marketplace/certified-operators-qjqm7" Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.204394 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3ca4de0-b24b-4085-88a1-80679f676a50-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a3ca4de0-b24b-4085-88a1-80679f676a50\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.210228 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/057806c7-b5ca-43df-91c7-30a2dc58c011-utilities\") pod \"certified-operators-qjqm7\" (UID: \"057806c7-b5ca-43df-91c7-30a2dc58c011\") " pod="openshift-marketplace/certified-operators-qjqm7" Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.210474 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/057806c7-b5ca-43df-91c7-30a2dc58c011-catalog-content\") pod \"certified-operators-qjqm7\" (UID: \"057806c7-b5ca-43df-91c7-30a2dc58c011\") " pod="openshift-marketplace/certified-operators-qjqm7" Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.226248 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs29d\" (UniqueName: \"kubernetes.io/projected/057806c7-b5ca-43df-91c7-30a2dc58c011-kube-api-access-gs29d\") pod \"certified-operators-qjqm7\" (UID: \"057806c7-b5ca-43df-91c7-30a2dc58c011\") " pod="openshift-marketplace/certified-operators-qjqm7" Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.267971 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qjqm7" Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.310146 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3ca4de0-b24b-4085-88a1-80679f676a50-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a3ca4de0-b24b-4085-88a1-80679f676a50\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.310610 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xcj6l"] Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.310625 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3ca4de0-b24b-4085-88a1-80679f676a50-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a3ca4de0-b24b-4085-88a1-80679f676a50\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.310668 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3ca4de0-b24b-4085-88a1-80679f676a50-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a3ca4de0-b24b-4085-88a1-80679f676a50\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.351355 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3ca4de0-b24b-4085-88a1-80679f676a50-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a3ca4de0-b24b-4085-88a1-80679f676a50\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.397915 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.409550 4886 patch_prober.go:28] interesting pod/router-default-5444994796-zrg4t container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:24:43 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Jan 29 16:24:43 crc kubenswrapper[4886]: [+]process-running ok Jan 29 16:24:43 crc kubenswrapper[4886]: healthz check failed Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.409597 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zrg4t" podUID="c5c84483-6cc1-4f51-86e1-330250fcb1d0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.445207 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-44l86"] Jan 29 16:24:43 crc kubenswrapper[4886]: W0129 16:24:43.464801 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1d6caa5_f77a_4acf_a631_0c3abb84959c.slice/crio-a00a9bdfeb0d8ca50bb13348e56690ba099ee336a61298251b903a6dea3d27eb WatchSource:0}: Error finding container a00a9bdfeb0d8ca50bb13348e56690ba099ee336a61298251b903a6dea3d27eb: Status 404 returned error can't find the container with id a00a9bdfeb0d8ca50bb13348e56690ba099ee336a61298251b903a6dea3d27eb Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.507527 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qjqm7"] Jan 29 16:24:43 crc kubenswrapper[4886]: W0129 16:24:43.560246 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod057806c7_b5ca_43df_91c7_30a2dc58c011.slice/crio-b80b8058bdb8fd4eef83ffeccee0a93733e929325e740b25b1e55fdba478cf66 WatchSource:0}: Error finding container b80b8058bdb8fd4eef83ffeccee0a93733e929325e740b25b1e55fdba478cf66: Status 404 returned error can't find the container with id b80b8058bdb8fd4eef83ffeccee0a93733e929325e740b25b1e55fdba478cf66 Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.578222 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-psrrq"] Jan 29 16:24:43 crc kubenswrapper[4886]: W0129 16:24:43.593156 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a50cf2f_b08d_4f5c_a364_d939d83aa205.slice/crio-977a500ff43da21b72edc2242140ccdd69d26da152fa09c76f29609579032cbf WatchSource:0}: Error finding container 977a500ff43da21b72edc2242140ccdd69d26da152fa09c76f29609579032cbf: Status 404 returned error can't find the container with id 977a500ff43da21b72edc2242140ccdd69d26da152fa09c76f29609579032cbf Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.628114 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 16:24:43 crc kubenswrapper[4886]: W0129 16:24:43.638211 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda3ca4de0_b24b_4085_88a1_80679f676a50.slice/crio-0b0e5ae92c8d315b010eabd06437a23027f9b655cc32621f0cf92d9da8c8a95d WatchSource:0}: Error finding 
container 0b0e5ae92c8d315b010eabd06437a23027f9b655cc32621f0cf92d9da8c8a95d: Status 404 returned error can't find the container with id 0b0e5ae92c8d315b010eabd06437a23027f9b655cc32621f0cf92d9da8c8a95d Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.909732 4886 generic.go:334] "Generic (PLEG): container finished" podID="9a50cf2f-b08d-4f5c-a364-d939d83aa205" containerID="f96346bc3cddc5b5f42583c8eb8f6cc35656bf523771e55b7bf0bb6b9c122669" exitCode=0 Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.910134 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-psrrq" event={"ID":"9a50cf2f-b08d-4f5c-a364-d939d83aa205","Type":"ContainerDied","Data":"f96346bc3cddc5b5f42583c8eb8f6cc35656bf523771e55b7bf0bb6b9c122669"} Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.910170 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-psrrq" event={"ID":"9a50cf2f-b08d-4f5c-a364-d939d83aa205","Type":"ContainerStarted","Data":"977a500ff43da21b72edc2242140ccdd69d26da152fa09c76f29609579032cbf"} Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.913234 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.916259 4886 generic.go:334] "Generic (PLEG): container finished" podID="434ccaea-8a30-4a97-8908-64bc9f550de0" containerID="9b90bb78250828a8de92c52ee575ca760465a8522cc7fc51c14297899de5ae91" exitCode=0 Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.916348 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cj9vs" event={"ID":"434ccaea-8a30-4a97-8908-64bc9f550de0","Type":"ContainerDied","Data":"9b90bb78250828a8de92c52ee575ca760465a8522cc7fc51c14297899de5ae91"} Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.916382 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cj9vs" event={"ID":"434ccaea-8a30-4a97-8908-64bc9f550de0","Type":"ContainerStarted","Data":"c930283727a8af009300e17c576da570a17d69226a2431e0b8f6442ab7a33682"} Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.922025 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a3ca4de0-b24b-4085-88a1-80679f676a50","Type":"ContainerStarted","Data":"0b0e5ae92c8d315b010eabd06437a23027f9b655cc32621f0cf92d9da8c8a95d"} Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.926710 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-44l86" event={"ID":"b1d6caa5-f77a-4acf-a631-0c3abb84959c","Type":"ContainerStarted","Data":"deed27046f024e80d24dc9a6d74e2361911272418a25dac03f3d34ed2d07513f"} Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.926746 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-44l86" event={"ID":"b1d6caa5-f77a-4acf-a631-0c3abb84959c","Type":"ContainerStarted","Data":"a00a9bdfeb0d8ca50bb13348e56690ba099ee336a61298251b903a6dea3d27eb"} Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.926762 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.929680 4886 generic.go:334] "Generic (PLEG): container finished" podID="047adc93-cb46-4ba7-bbdf-4d485a08ea6b" 
containerID="587e95e478255c5ab7978918eda8a5869d425a31c3fad8525cf07ea38da482d5" exitCode=0 Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.929743 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcj6l" event={"ID":"047adc93-cb46-4ba7-bbdf-4d485a08ea6b","Type":"ContainerDied","Data":"587e95e478255c5ab7978918eda8a5869d425a31c3fad8525cf07ea38da482d5"} Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.929774 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcj6l" event={"ID":"047adc93-cb46-4ba7-bbdf-4d485a08ea6b","Type":"ContainerStarted","Data":"b49a4641d27203a40e0f7e4f28f82c1063741221c6c208a86d4e1a5bc30f7000"} Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.935712 4886 generic.go:334] "Generic (PLEG): container finished" podID="057806c7-b5ca-43df-91c7-30a2dc58c011" containerID="a2a6cbc6c2cee221b3e74aba38fce6c75da0d8e08f7766fa4a0eb1f485c41312" exitCode=0 Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.935780 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjqm7" event={"ID":"057806c7-b5ca-43df-91c7-30a2dc58c011","Type":"ContainerDied","Data":"a2a6cbc6c2cee221b3e74aba38fce6c75da0d8e08f7766fa4a0eb1f485c41312"} Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.935826 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjqm7" event={"ID":"057806c7-b5ca-43df-91c7-30a2dc58c011","Type":"ContainerStarted","Data":"b80b8058bdb8fd4eef83ffeccee0a93733e929325e740b25b1e55fdba478cf66"} Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.969746 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-44l86" podStartSLOduration=135.968888229 podStartE2EDuration="2m15.968888229s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:43.968469797 +0000 UTC m=+166.877189099" watchObservedRunningTime="2026-01-29 16:24:43.968888229 +0000 UTC m=+166.877607501" Jan 29 16:24:43 crc kubenswrapper[4886]: I0129 16:24:43.976396 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bxbsl" Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.409107 4886 patch_prober.go:28] interesting pod/router-default-5444994796-zrg4t container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:24:44 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Jan 29 16:24:44 crc kubenswrapper[4886]: [+]process-running ok Jan 29 16:24:44 crc kubenswrapper[4886]: healthz check failed Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.409172 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zrg4t" podUID="c5c84483-6cc1-4f51-86e1-330250fcb1d0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.491750 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xzc5s"] Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.492761 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xzc5s" Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.498908 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.520408 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xzc5s"] Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.576114 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.589453 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-v5s4w" Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.622999 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.629422 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8a07d27-67fb-47e8-9032-e4f831983d75-catalog-content\") pod \"redhat-marketplace-xzc5s\" (UID: \"d8a07d27-67fb-47e8-9032-e4f831983d75\") " pod="openshift-marketplace/redhat-marketplace-xzc5s" Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.629487 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xncm2\" (UniqueName: \"kubernetes.io/projected/d8a07d27-67fb-47e8-9032-e4f831983d75-kube-api-access-xncm2\") pod \"redhat-marketplace-xzc5s\" (UID: \"d8a07d27-67fb-47e8-9032-e4f831983d75\") " pod="openshift-marketplace/redhat-marketplace-xzc5s" Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.629549 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8a07d27-67fb-47e8-9032-e4f831983d75-utilities\") pod \"redhat-marketplace-xzc5s\" (UID: \"d8a07d27-67fb-47e8-9032-e4f831983d75\") " pod="openshift-marketplace/redhat-marketplace-xzc5s" Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.730874 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8a07d27-67fb-47e8-9032-e4f831983d75-utilities\") pod \"redhat-marketplace-xzc5s\" (UID: \"d8a07d27-67fb-47e8-9032-e4f831983d75\") " pod="openshift-marketplace/redhat-marketplace-xzc5s" Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.731003 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8a07d27-67fb-47e8-9032-e4f831983d75-catalog-content\") pod \"redhat-marketplace-xzc5s\" (UID: \"d8a07d27-67fb-47e8-9032-e4f831983d75\") " pod="openshift-marketplace/redhat-marketplace-xzc5s" Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.731048 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xncm2\" (UniqueName: \"kubernetes.io/projected/d8a07d27-67fb-47e8-9032-e4f831983d75-kube-api-access-xncm2\") pod \"redhat-marketplace-xzc5s\" (UID: \"d8a07d27-67fb-47e8-9032-e4f831983d75\") " pod="openshift-marketplace/redhat-marketplace-xzc5s" Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 
16:24:44.731910 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8a07d27-67fb-47e8-9032-e4f831983d75-utilities\") pod \"redhat-marketplace-xzc5s\" (UID: \"d8a07d27-67fb-47e8-9032-e4f831983d75\") " pod="openshift-marketplace/redhat-marketplace-xzc5s" Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.732340 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8a07d27-67fb-47e8-9032-e4f831983d75-catalog-content\") pod \"redhat-marketplace-xzc5s\" (UID: \"d8a07d27-67fb-47e8-9032-e4f831983d75\") " pod="openshift-marketplace/redhat-marketplace-xzc5s" Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.749184 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-frztl" Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.750888 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-frztl" Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.755044 4886 patch_prober.go:28] interesting pod/console-f9d7485db-frztl container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.755091 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-frztl" podUID="ffb1a6d7-9220-473e-9fcd-8d91d590f3a5" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.755688 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xncm2\" (UniqueName: \"kubernetes.io/projected/d8a07d27-67fb-47e8-9032-e4f831983d75-kube-api-access-xncm2\") pod \"redhat-marketplace-xzc5s\" (UID: \"d8a07d27-67fb-47e8-9032-e4f831983d75\") " pod="openshift-marketplace/redhat-marketplace-xzc5s" Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.792123 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.792752 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.799863 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.821505 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xzc5s" Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.894567 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zs9nq"] Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.895806 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zs9nq" Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.911620 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zs9nq"] Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.934498 4886 patch_prober.go:28] interesting pod/downloads-7954f5f757-wczvq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.934573 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wczvq" podUID="d677ab93-2fac-4612-8558-8ffc559d5247" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.942018 4886 patch_prober.go:28] interesting pod/downloads-7954f5f757-wczvq container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.942059 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-wczvq" podUID="d677ab93-2fac-4612-8558-8ffc559d5247" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.972383 4886 generic.go:334] "Generic (PLEG): container finished" podID="a3ca4de0-b24b-4085-88a1-80679f676a50" containerID="dff6261a9d09217074b13ba2c40cb42ae9ab77225a0e738cdb2ee44ecf7171fa" exitCode=0 Jan 29 16:24:44 crc kubenswrapper[4886]: I0129 16:24:44.979742 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a3ca4de0-b24b-4085-88a1-80679f676a50","Type":"ContainerDied","Data":"dff6261a9d09217074b13ba2c40cb42ae9ab77225a0e738cdb2ee44ecf7171fa"} Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:44.999259 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwmkt" Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.045915 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96zvz\" (UniqueName: \"kubernetes.io/projected/dd20d05f-cd0f-401e-b18a-2f89354792d0-kube-api-access-96zvz\") pod \"redhat-marketplace-zs9nq\" (UID: \"dd20d05f-cd0f-401e-b18a-2f89354792d0\") " pod="openshift-marketplace/redhat-marketplace-zs9nq" Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.045967 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd20d05f-cd0f-401e-b18a-2f89354792d0-utilities\") pod \"redhat-marketplace-zs9nq\" (UID: \"dd20d05f-cd0f-401e-b18a-2f89354792d0\") " pod="openshift-marketplace/redhat-marketplace-zs9nq" Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.045998 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd20d05f-cd0f-401e-b18a-2f89354792d0-catalog-content\") pod 
\"redhat-marketplace-zs9nq\" (UID: \"dd20d05f-cd0f-401e-b18a-2f89354792d0\") " pod="openshift-marketplace/redhat-marketplace-zs9nq" Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.147562 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd20d05f-cd0f-401e-b18a-2f89354792d0-utilities\") pod \"redhat-marketplace-zs9nq\" (UID: \"dd20d05f-cd0f-401e-b18a-2f89354792d0\") " pod="openshift-marketplace/redhat-marketplace-zs9nq" Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.147693 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd20d05f-cd0f-401e-b18a-2f89354792d0-catalog-content\") pod \"redhat-marketplace-zs9nq\" (UID: \"dd20d05f-cd0f-401e-b18a-2f89354792d0\") " pod="openshift-marketplace/redhat-marketplace-zs9nq" Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.147895 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96zvz\" (UniqueName: \"kubernetes.io/projected/dd20d05f-cd0f-401e-b18a-2f89354792d0-kube-api-access-96zvz\") pod \"redhat-marketplace-zs9nq\" (UID: \"dd20d05f-cd0f-401e-b18a-2f89354792d0\") " pod="openshift-marketplace/redhat-marketplace-zs9nq" Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.150000 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd20d05f-cd0f-401e-b18a-2f89354792d0-utilities\") pod \"redhat-marketplace-zs9nq\" (UID: \"dd20d05f-cd0f-401e-b18a-2f89354792d0\") " pod="openshift-marketplace/redhat-marketplace-zs9nq" Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.150899 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd20d05f-cd0f-401e-b18a-2f89354792d0-catalog-content\") pod \"redhat-marketplace-zs9nq\" (UID: \"dd20d05f-cd0f-401e-b18a-2f89354792d0\") " pod="openshift-marketplace/redhat-marketplace-zs9nq" Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.211280 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96zvz\" (UniqueName: \"kubernetes.io/projected/dd20d05f-cd0f-401e-b18a-2f89354792d0-kube-api-access-96zvz\") pod \"redhat-marketplace-zs9nq\" (UID: \"dd20d05f-cd0f-401e-b18a-2f89354792d0\") " pod="openshift-marketplace/redhat-marketplace-zs9nq" Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.225579 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zs9nq" Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.244742 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xzc5s"] Jan 29 16:24:45 crc kubenswrapper[4886]: W0129 16:24:45.322453 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8a07d27_67fb_47e8_9032_e4f831983d75.slice/crio-8df354200569f756ef71068446371a43cfad097210faf33ea3e2d3966f2eb917 WatchSource:0}: Error finding container 8df354200569f756ef71068446371a43cfad097210faf33ea3e2d3966f2eb917: Status 404 returned error can't find the container with id 8df354200569f756ef71068446371a43cfad097210faf33ea3e2d3966f2eb917 Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.405726 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-zrg4t" Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.408793 4886 patch_prober.go:28] interesting pod/router-default-5444994796-zrg4t container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:24:45 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Jan 29 16:24:45 crc kubenswrapper[4886]: [+]process-running ok Jan 29 16:24:45 crc kubenswrapper[4886]: healthz check failed Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.408860 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zrg4t" podUID="c5c84483-6cc1-4f51-86e1-330250fcb1d0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.500025 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6hph6"] Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.501310 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6hph6" Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.504262 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.532140 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6hph6"] Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.554391 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-w8bm4" Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.663133 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c36e6697-37b9-4b10-baea-0f9c92014c79-catalog-content\") pod \"redhat-operators-6hph6\" (UID: \"c36e6697-37b9-4b10-baea-0f9c92014c79\") " pod="openshift-marketplace/redhat-operators-6hph6" Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.663188 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c36e6697-37b9-4b10-baea-0f9c92014c79-utilities\") pod \"redhat-operators-6hph6\" (UID: \"c36e6697-37b9-4b10-baea-0f9c92014c79\") " pod="openshift-marketplace/redhat-operators-6hph6" Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.663216 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf8xv\" (UniqueName: \"kubernetes.io/projected/c36e6697-37b9-4b10-baea-0f9c92014c79-kube-api-access-qf8xv\") pod \"redhat-operators-6hph6\" (UID: \"c36e6697-37b9-4b10-baea-0f9c92014c79\") " pod="openshift-marketplace/redhat-operators-6hph6" Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.764307 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c36e6697-37b9-4b10-baea-0f9c92014c79-catalog-content\") pod \"redhat-operators-6hph6\" (UID: \"c36e6697-37b9-4b10-baea-0f9c92014c79\") " pod="openshift-marketplace/redhat-operators-6hph6" Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.764821 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c36e6697-37b9-4b10-baea-0f9c92014c79-utilities\") pod \"redhat-operators-6hph6\" (UID: \"c36e6697-37b9-4b10-baea-0f9c92014c79\") " pod="openshift-marketplace/redhat-operators-6hph6" Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.764845 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf8xv\" (UniqueName: \"kubernetes.io/projected/c36e6697-37b9-4b10-baea-0f9c92014c79-kube-api-access-qf8xv\") pod \"redhat-operators-6hph6\" (UID: \"c36e6697-37b9-4b10-baea-0f9c92014c79\") " pod="openshift-marketplace/redhat-operators-6hph6" Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.765314 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c36e6697-37b9-4b10-baea-0f9c92014c79-utilities\") pod \"redhat-operators-6hph6\" (UID: \"c36e6697-37b9-4b10-baea-0f9c92014c79\") " pod="openshift-marketplace/redhat-operators-6hph6" Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.765349 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c36e6697-37b9-4b10-baea-0f9c92014c79-catalog-content\") pod \"redhat-operators-6hph6\" (UID: \"c36e6697-37b9-4b10-baea-0f9c92014c79\") " pod="openshift-marketplace/redhat-operators-6hph6" Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.765595 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-24n77" Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.791373 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf8xv\" (UniqueName: \"kubernetes.io/projected/c36e6697-37b9-4b10-baea-0f9c92014c79-kube-api-access-qf8xv\") pod \"redhat-operators-6hph6\" (UID: \"c36e6697-37b9-4b10-baea-0f9c92014c79\") " pod="openshift-marketplace/redhat-operators-6hph6" Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.831869 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zs9nq"] Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.834993 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6hph6" Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.891311 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4jbxl"] Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.892320 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4jbxl" Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.906273 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4jbxl"] Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.971216 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a710476e-74f4-4f7e-ab94-d2428bade61e-catalog-content\") pod \"redhat-operators-4jbxl\" (UID: \"a710476e-74f4-4f7e-ab94-d2428bade61e\") " pod="openshift-marketplace/redhat-operators-4jbxl" Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.971345 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a710476e-74f4-4f7e-ab94-d2428bade61e-utilities\") pod \"redhat-operators-4jbxl\" (UID: \"a710476e-74f4-4f7e-ab94-d2428bade61e\") " pod="openshift-marketplace/redhat-operators-4jbxl" Jan 29 16:24:45 crc kubenswrapper[4886]: I0129 16:24:45.971370 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkvmx\" (UniqueName: \"kubernetes.io/projected/a710476e-74f4-4f7e-ab94-d2428bade61e-kube-api-access-kkvmx\") pod \"redhat-operators-4jbxl\" (UID: \"a710476e-74f4-4f7e-ab94-d2428bade61e\") " pod="openshift-marketplace/redhat-operators-4jbxl" Jan 29 16:24:46 crc kubenswrapper[4886]: I0129 16:24:46.057941 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zs9nq" event={"ID":"dd20d05f-cd0f-401e-b18a-2f89354792d0","Type":"ContainerStarted","Data":"3a14ec6fcf7e574cbb7bb1e550a27abeaf3193fe3131800ddd76cb089990f9d3"} Jan 29 16:24:46 crc kubenswrapper[4886]: I0129 16:24:46.066169 4886 generic.go:334] "Generic (PLEG): container finished" podID="d8a07d27-67fb-47e8-9032-e4f831983d75" containerID="3fb3181dff0539237c77e3f3e6bfc2daf84ba731ba94f2127334c7ba90e867dd" exitCode=0 
Jan 29 16:24:46 crc kubenswrapper[4886]: I0129 16:24:46.067566 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xzc5s" event={"ID":"d8a07d27-67fb-47e8-9032-e4f831983d75","Type":"ContainerDied","Data":"3fb3181dff0539237c77e3f3e6bfc2daf84ba731ba94f2127334c7ba90e867dd"} Jan 29 16:24:46 crc kubenswrapper[4886]: I0129 16:24:46.067612 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xzc5s" event={"ID":"d8a07d27-67fb-47e8-9032-e4f831983d75","Type":"ContainerStarted","Data":"8df354200569f756ef71068446371a43cfad097210faf33ea3e2d3966f2eb917"} Jan 29 16:24:46 crc kubenswrapper[4886]: I0129 16:24:46.075051 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a710476e-74f4-4f7e-ab94-d2428bade61e-utilities\") pod \"redhat-operators-4jbxl\" (UID: \"a710476e-74f4-4f7e-ab94-d2428bade61e\") " pod="openshift-marketplace/redhat-operators-4jbxl" Jan 29 16:24:46 crc kubenswrapper[4886]: I0129 16:24:46.075094 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkvmx\" (UniqueName: \"kubernetes.io/projected/a710476e-74f4-4f7e-ab94-d2428bade61e-kube-api-access-kkvmx\") pod \"redhat-operators-4jbxl\" (UID: \"a710476e-74f4-4f7e-ab94-d2428bade61e\") " pod="openshift-marketplace/redhat-operators-4jbxl" Jan 29 16:24:46 crc kubenswrapper[4886]: I0129 16:24:46.075116 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a710476e-74f4-4f7e-ab94-d2428bade61e-catalog-content\") pod \"redhat-operators-4jbxl\" (UID: \"a710476e-74f4-4f7e-ab94-d2428bade61e\") " pod="openshift-marketplace/redhat-operators-4jbxl" Jan 29 16:24:46 crc kubenswrapper[4886]: I0129 16:24:46.075929 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a710476e-74f4-4f7e-ab94-d2428bade61e-catalog-content\") pod \"redhat-operators-4jbxl\" (UID: \"a710476e-74f4-4f7e-ab94-d2428bade61e\") " pod="openshift-marketplace/redhat-operators-4jbxl" Jan 29 16:24:46 crc kubenswrapper[4886]: I0129 16:24:46.075991 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a710476e-74f4-4f7e-ab94-d2428bade61e-utilities\") pod \"redhat-operators-4jbxl\" (UID: \"a710476e-74f4-4f7e-ab94-d2428bade61e\") " pod="openshift-marketplace/redhat-operators-4jbxl" Jan 29 16:24:46 crc kubenswrapper[4886]: I0129 16:24:46.104573 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkvmx\" (UniqueName: \"kubernetes.io/projected/a710476e-74f4-4f7e-ab94-d2428bade61e-kube-api-access-kkvmx\") pod \"redhat-operators-4jbxl\" (UID: \"a710476e-74f4-4f7e-ab94-d2428bade61e\") " pod="openshift-marketplace/redhat-operators-4jbxl" Jan 29 16:24:46 crc kubenswrapper[4886]: I0129 16:24:46.167240 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6hph6"] Jan 29 16:24:46 crc kubenswrapper[4886]: W0129 16:24:46.251577 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc36e6697_37b9_4b10_baea_0f9c92014c79.slice/crio-2597500a6782cab3fff1d1bf05e088755f933968f6726da1d1dcae802c73e7f3 WatchSource:0}: Error finding container 2597500a6782cab3fff1d1bf05e088755f933968f6726da1d1dcae802c73e7f3: 
Status 404 returned error can't find the container with id 2597500a6782cab3fff1d1bf05e088755f933968f6726da1d1dcae802c73e7f3 Jan 29 16:24:46 crc kubenswrapper[4886]: I0129 16:24:46.251733 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4jbxl" Jan 29 16:24:46 crc kubenswrapper[4886]: I0129 16:24:46.410857 4886 patch_prober.go:28] interesting pod/router-default-5444994796-zrg4t container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:24:46 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Jan 29 16:24:46 crc kubenswrapper[4886]: [+]process-running ok Jan 29 16:24:46 crc kubenswrapper[4886]: healthz check failed Jan 29 16:24:46 crc kubenswrapper[4886]: I0129 16:24:46.410906 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zrg4t" podUID="c5c84483-6cc1-4f51-86e1-330250fcb1d0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:24:46 crc kubenswrapper[4886]: I0129 16:24:46.558888 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 16:24:46 crc kubenswrapper[4886]: I0129 16:24:46.709791 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3ca4de0-b24b-4085-88a1-80679f676a50-kubelet-dir\") pod \"a3ca4de0-b24b-4085-88a1-80679f676a50\" (UID: \"a3ca4de0-b24b-4085-88a1-80679f676a50\") " Jan 29 16:24:46 crc kubenswrapper[4886]: I0129 16:24:46.709849 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3ca4de0-b24b-4085-88a1-80679f676a50-kube-api-access\") pod \"a3ca4de0-b24b-4085-88a1-80679f676a50\" (UID: \"a3ca4de0-b24b-4085-88a1-80679f676a50\") " Jan 29 16:24:46 crc kubenswrapper[4886]: I0129 16:24:46.711018 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3ca4de0-b24b-4085-88a1-80679f676a50-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a3ca4de0-b24b-4085-88a1-80679f676a50" (UID: "a3ca4de0-b24b-4085-88a1-80679f676a50"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:24:46 crc kubenswrapper[4886]: I0129 16:24:46.720816 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3ca4de0-b24b-4085-88a1-80679f676a50-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a3ca4de0-b24b-4085-88a1-80679f676a50" (UID: "a3ca4de0-b24b-4085-88a1-80679f676a50"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:24:46 crc kubenswrapper[4886]: I0129 16:24:46.811057 4886 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3ca4de0-b24b-4085-88a1-80679f676a50-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:24:46 crc kubenswrapper[4886]: I0129 16:24:46.811097 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3ca4de0-b24b-4085-88a1-80679f676a50-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:24:46 crc kubenswrapper[4886]: I0129 16:24:46.887541 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 16:24:46 crc kubenswrapper[4886]: E0129 16:24:46.887962 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3ca4de0-b24b-4085-88a1-80679f676a50" containerName="pruner" Jan 29 16:24:46 crc kubenswrapper[4886]: I0129 16:24:46.887981 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3ca4de0-b24b-4085-88a1-80679f676a50" containerName="pruner" Jan 29 16:24:46 crc kubenswrapper[4886]: I0129 16:24:46.888143 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3ca4de0-b24b-4085-88a1-80679f676a50" containerName="pruner" Jan 29 16:24:46 crc kubenswrapper[4886]: I0129 16:24:46.889873 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 16:24:46 crc kubenswrapper[4886]: I0129 16:24:46.894763 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 16:24:46 crc kubenswrapper[4886]: I0129 16:24:46.900820 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 16:24:46 crc kubenswrapper[4886]: I0129 16:24:46.900940 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 16:24:46 crc kubenswrapper[4886]: I0129 16:24:46.905595 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4jbxl"] Jan 29 16:24:47 crc kubenswrapper[4886]: I0129 16:24:47.014173 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c425266a-aaff-4684-a7f6-647dcd8073cd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c425266a-aaff-4684-a7f6-647dcd8073cd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 16:24:47 crc kubenswrapper[4886]: I0129 16:24:47.014503 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c425266a-aaff-4684-a7f6-647dcd8073cd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c425266a-aaff-4684-a7f6-647dcd8073cd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 16:24:47 crc kubenswrapper[4886]: I0129 16:24:47.076492 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jbxl" event={"ID":"a710476e-74f4-4f7e-ab94-d2428bade61e","Type":"ContainerStarted","Data":"1b7c7ac95d6deb14d58d68d8614d14207966e7b0c294b7297faa9446ddd99953"} Jan 29 16:24:47 crc kubenswrapper[4886]: I0129 16:24:47.082211 4886 generic.go:334] "Generic (PLEG): container finished" podID="a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9" 
containerID="e24030b3765055e623ca669573f5fe2306c10abdab283e014f331f200998a684" exitCode=0 Jan 29 16:24:47 crc kubenswrapper[4886]: I0129 16:24:47.082285 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-bkqmf" event={"ID":"a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9","Type":"ContainerDied","Data":"e24030b3765055e623ca669573f5fe2306c10abdab283e014f331f200998a684"} Jan 29 16:24:47 crc kubenswrapper[4886]: I0129 16:24:47.083649 4886 generic.go:334] "Generic (PLEG): container finished" podID="c36e6697-37b9-4b10-baea-0f9c92014c79" containerID="0cdb18d5f5fa9a44559e46fd01c9effbb1ab6cf3c5ac5db03199ac60dda03f17" exitCode=0 Jan 29 16:24:47 crc kubenswrapper[4886]: I0129 16:24:47.083842 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hph6" event={"ID":"c36e6697-37b9-4b10-baea-0f9c92014c79","Type":"ContainerDied","Data":"0cdb18d5f5fa9a44559e46fd01c9effbb1ab6cf3c5ac5db03199ac60dda03f17"} Jan 29 16:24:47 crc kubenswrapper[4886]: I0129 16:24:47.083916 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hph6" event={"ID":"c36e6697-37b9-4b10-baea-0f9c92014c79","Type":"ContainerStarted","Data":"2597500a6782cab3fff1d1bf05e088755f933968f6726da1d1dcae802c73e7f3"} Jan 29 16:24:47 crc kubenswrapper[4886]: I0129 16:24:47.089137 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zs9nq" event={"ID":"dd20d05f-cd0f-401e-b18a-2f89354792d0","Type":"ContainerDied","Data":"993aeae10b51b9ba867b7ad588cb7c6e7651b0c3345b073059af7a58ad9790c3"} Jan 29 16:24:47 crc kubenswrapper[4886]: I0129 16:24:47.089078 4886 generic.go:334] "Generic (PLEG): container finished" podID="dd20d05f-cd0f-401e-b18a-2f89354792d0" containerID="993aeae10b51b9ba867b7ad588cb7c6e7651b0c3345b073059af7a58ad9790c3" exitCode=0 Jan 29 16:24:47 crc kubenswrapper[4886]: I0129 16:24:47.096134 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a3ca4de0-b24b-4085-88a1-80679f676a50","Type":"ContainerDied","Data":"0b0e5ae92c8d315b010eabd06437a23027f9b655cc32621f0cf92d9da8c8a95d"} Jan 29 16:24:47 crc kubenswrapper[4886]: I0129 16:24:47.096199 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b0e5ae92c8d315b010eabd06437a23027f9b655cc32621f0cf92d9da8c8a95d" Jan 29 16:24:47 crc kubenswrapper[4886]: I0129 16:24:47.098502 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 16:24:47 crc kubenswrapper[4886]: I0129 16:24:47.122416 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c425266a-aaff-4684-a7f6-647dcd8073cd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c425266a-aaff-4684-a7f6-647dcd8073cd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 16:24:47 crc kubenswrapper[4886]: I0129 16:24:47.122505 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c425266a-aaff-4684-a7f6-647dcd8073cd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c425266a-aaff-4684-a7f6-647dcd8073cd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 16:24:47 crc kubenswrapper[4886]: I0129 16:24:47.122582 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c425266a-aaff-4684-a7f6-647dcd8073cd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c425266a-aaff-4684-a7f6-647dcd8073cd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 16:24:47 crc kubenswrapper[4886]: I0129 16:24:47.162900 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c425266a-aaff-4684-a7f6-647dcd8073cd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c425266a-aaff-4684-a7f6-647dcd8073cd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 16:24:47 crc kubenswrapper[4886]: I0129 16:24:47.257816 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 16:24:47 crc kubenswrapper[4886]: I0129 16:24:47.420640 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-zrg4t" Jan 29 16:24:47 crc kubenswrapper[4886]: I0129 16:24:47.431797 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-zrg4t" Jan 29 16:24:47 crc kubenswrapper[4886]: I0129 16:24:47.638145 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 16:24:47 crc kubenswrapper[4886]: W0129 16:24:47.660048 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc425266a_aaff_4684_a7f6_647dcd8073cd.slice/crio-88393935927efd21cf9a8daa09c7d8e9e03573373366f935370e8ae9125e969b WatchSource:0}: Error finding container 88393935927efd21cf9a8daa09c7d8e9e03573373366f935370e8ae9125e969b: Status 404 returned error can't find the container with id 88393935927efd21cf9a8daa09c7d8e9e03573373366f935370e8ae9125e969b Jan 29 16:24:48 crc kubenswrapper[4886]: I0129 16:24:48.123515 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c425266a-aaff-4684-a7f6-647dcd8073cd","Type":"ContainerStarted","Data":"88393935927efd21cf9a8daa09c7d8e9e03573373366f935370e8ae9125e969b"} Jan 29 16:24:48 crc kubenswrapper[4886]: I0129 16:24:48.134548 4886 generic.go:334] "Generic (PLEG): container finished" podID="a710476e-74f4-4f7e-ab94-d2428bade61e" containerID="542d74b470422150123685d3edf24455da6a5470e04d40768b0ed7b1e8d27bc4" exitCode=0 Jan 29 16:24:48 crc kubenswrapper[4886]: I0129 16:24:48.134844 4886 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jbxl" event={"ID":"a710476e-74f4-4f7e-ab94-d2428bade61e","Type":"ContainerDied","Data":"542d74b470422150123685d3edf24455da6a5470e04d40768b0ed7b1e8d27bc4"} Jan 29 16:24:48 crc kubenswrapper[4886]: I0129 16:24:48.475777 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-bkqmf" Jan 29 16:24:48 crc kubenswrapper[4886]: I0129 16:24:48.654199 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52ttw\" (UniqueName: \"kubernetes.io/projected/a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9-kube-api-access-52ttw\") pod \"a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9\" (UID: \"a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9\") " Jan 29 16:24:48 crc kubenswrapper[4886]: I0129 16:24:48.654569 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9-config-volume\") pod \"a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9\" (UID: \"a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9\") " Jan 29 16:24:48 crc kubenswrapper[4886]: I0129 16:24:48.654751 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9-secret-volume\") pod \"a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9\" (UID: \"a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9\") " Jan 29 16:24:48 crc kubenswrapper[4886]: I0129 16:24:48.656982 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9-config-volume" (OuterVolumeSpecName: "config-volume") pod "a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9" (UID: "a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:24:48 crc kubenswrapper[4886]: I0129 16:24:48.662409 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9-kube-api-access-52ttw" (OuterVolumeSpecName: "kube-api-access-52ttw") pod "a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9" (UID: "a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9"). InnerVolumeSpecName "kube-api-access-52ttw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:24:48 crc kubenswrapper[4886]: I0129 16:24:48.666848 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9" (UID: "a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:24:48 crc kubenswrapper[4886]: I0129 16:24:48.757778 4886 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:24:48 crc kubenswrapper[4886]: I0129 16:24:48.757813 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52ttw\" (UniqueName: \"kubernetes.io/projected/a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9-kube-api-access-52ttw\") on node \"crc\" DevicePath \"\"" Jan 29 16:24:48 crc kubenswrapper[4886]: I0129 16:24:48.757825 4886 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:24:49 crc kubenswrapper[4886]: I0129 16:24:49.151177 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c425266a-aaff-4684-a7f6-647dcd8073cd","Type":"ContainerStarted","Data":"c342d6eaed9ce916ae25a84b3ff0e9628cbd0cfbf9832086c0422ff7d39b0b44"} Jan 29 16:24:49 crc kubenswrapper[4886]: I0129 16:24:49.160365 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-bkqmf" event={"ID":"a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9","Type":"ContainerDied","Data":"0d20bb5551ca7feda7d1ab34d809d68e52dc7cfd3aa9abdfcd5789f0817ad288"} Jan 29 16:24:49 crc kubenswrapper[4886]: I0129 16:24:49.160403 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d20bb5551ca7feda7d1ab34d809d68e52dc7cfd3aa9abdfcd5789f0817ad288" Jan 29 16:24:49 crc kubenswrapper[4886]: I0129 16:24:49.160477 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-bkqmf" Jan 29 16:24:49 crc kubenswrapper[4886]: I0129 16:24:49.179132 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.179119162 podStartE2EDuration="3.179119162s" podCreationTimestamp="2026-01-29 16:24:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:49.17769801 +0000 UTC m=+172.086417282" watchObservedRunningTime="2026-01-29 16:24:49.179119162 +0000 UTC m=+172.087838434" Jan 29 16:24:50 crc kubenswrapper[4886]: I0129 16:24:50.172292 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c425266a-aaff-4684-a7f6-647dcd8073cd","Type":"ContainerDied","Data":"c342d6eaed9ce916ae25a84b3ff0e9628cbd0cfbf9832086c0422ff7d39b0b44"} Jan 29 16:24:50 crc kubenswrapper[4886]: I0129 16:24:50.172522 4886 generic.go:334] "Generic (PLEG): container finished" podID="c425266a-aaff-4684-a7f6-647dcd8073cd" containerID="c342d6eaed9ce916ae25a84b3ff0e9628cbd0cfbf9832086c0422ff7d39b0b44" exitCode=0 Jan 29 16:24:50 crc kubenswrapper[4886]: I0129 16:24:50.575772 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-76mxm" Jan 29 16:24:51 crc kubenswrapper[4886]: I0129 16:24:51.099141 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/75261312-030c-44eb-8d08-07a35f5bcfcc-metrics-certs\") pod \"network-metrics-daemon-c7wkw\" (UID: \"75261312-030c-44eb-8d08-07a35f5bcfcc\") " pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:24:51 crc kubenswrapper[4886]: I0129 16:24:51.104963 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/75261312-030c-44eb-8d08-07a35f5bcfcc-metrics-certs\") pod \"network-metrics-daemon-c7wkw\" (UID: \"75261312-030c-44eb-8d08-07a35f5bcfcc\") " pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:24:51 crc kubenswrapper[4886]: I0129 16:24:51.340582 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c7wkw" Jan 29 16:24:51 crc kubenswrapper[4886]: I0129 16:24:51.435210 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 16:24:51 crc kubenswrapper[4886]: I0129 16:24:51.505852 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c425266a-aaff-4684-a7f6-647dcd8073cd-kube-api-access\") pod \"c425266a-aaff-4684-a7f6-647dcd8073cd\" (UID: \"c425266a-aaff-4684-a7f6-647dcd8073cd\") " Jan 29 16:24:51 crc kubenswrapper[4886]: I0129 16:24:51.505912 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c425266a-aaff-4684-a7f6-647dcd8073cd-kubelet-dir\") pod \"c425266a-aaff-4684-a7f6-647dcd8073cd\" (UID: \"c425266a-aaff-4684-a7f6-647dcd8073cd\") " Jan 29 16:24:51 crc kubenswrapper[4886]: I0129 16:24:51.506211 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c425266a-aaff-4684-a7f6-647dcd8073cd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c425266a-aaff-4684-a7f6-647dcd8073cd" (UID: "c425266a-aaff-4684-a7f6-647dcd8073cd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:24:51 crc kubenswrapper[4886]: I0129 16:24:51.516170 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c425266a-aaff-4684-a7f6-647dcd8073cd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c425266a-aaff-4684-a7f6-647dcd8073cd" (UID: "c425266a-aaff-4684-a7f6-647dcd8073cd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:24:51 crc kubenswrapper[4886]: I0129 16:24:51.608145 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c425266a-aaff-4684-a7f6-647dcd8073cd-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:24:51 crc kubenswrapper[4886]: I0129 16:24:51.608296 4886 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c425266a-aaff-4684-a7f6-647dcd8073cd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:24:51 crc kubenswrapper[4886]: I0129 16:24:51.746875 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-c7wkw"] Jan 29 16:24:51 crc kubenswrapper[4886]: W0129 16:24:51.762249 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75261312_030c_44eb_8d08_07a35f5bcfcc.slice/crio-f6f025886657235761e9607e750f6f68243a3b0a14c2aa236b4df5b15ea9da38 WatchSource:0}: Error finding container f6f025886657235761e9607e750f6f68243a3b0a14c2aa236b4df5b15ea9da38: Status 404 returned error can't find the container with id f6f025886657235761e9607e750f6f68243a3b0a14c2aa236b4df5b15ea9da38 Jan 29 16:24:52 crc kubenswrapper[4886]: I0129 16:24:52.199995 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-c7wkw" event={"ID":"75261312-030c-44eb-8d08-07a35f5bcfcc","Type":"ContainerStarted","Data":"f6f025886657235761e9607e750f6f68243a3b0a14c2aa236b4df5b15ea9da38"} Jan 29 16:24:52 crc kubenswrapper[4886]: I0129 16:24:52.205299 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c425266a-aaff-4684-a7f6-647dcd8073cd","Type":"ContainerDied","Data":"88393935927efd21cf9a8daa09c7d8e9e03573373366f935370e8ae9125e969b"} Jan 29 
16:24:52 crc kubenswrapper[4886]: I0129 16:24:52.205354 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88393935927efd21cf9a8daa09c7d8e9e03573373366f935370e8ae9125e969b" Jan 29 16:24:52 crc kubenswrapper[4886]: I0129 16:24:52.205512 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 16:24:53 crc kubenswrapper[4886]: I0129 16:24:53.216458 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-c7wkw" event={"ID":"75261312-030c-44eb-8d08-07a35f5bcfcc","Type":"ContainerStarted","Data":"b82a434a3e80a315483236d47fdb2e394500af27d39644f727f06fe03a562e82"} Jan 29 16:24:54 crc kubenswrapper[4886]: I0129 16:24:54.752704 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-frztl" Jan 29 16:24:54 crc kubenswrapper[4886]: I0129 16:24:54.756885 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-frztl" Jan 29 16:24:54 crc kubenswrapper[4886]: I0129 16:24:54.934391 4886 patch_prober.go:28] interesting pod/downloads-7954f5f757-wczvq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 29 16:24:54 crc kubenswrapper[4886]: I0129 16:24:54.934466 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wczvq" podUID="d677ab93-2fac-4612-8558-8ffc559d5247" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 29 16:24:54 crc kubenswrapper[4886]: I0129 16:24:54.934819 4886 patch_prober.go:28] interesting pod/downloads-7954f5f757-wczvq container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 29 16:24:54 crc kubenswrapper[4886]: I0129 16:24:54.934885 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-wczvq" podUID="d677ab93-2fac-4612-8558-8ffc559d5247" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 29 16:24:59 crc kubenswrapper[4886]: I0129 16:24:59.660930 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:24:59 crc kubenswrapper[4886]: I0129 16:24:59.661363 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:25:03 crc kubenswrapper[4886]: I0129 16:25:03.116561 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-44l86" Jan 29 16:25:04 crc kubenswrapper[4886]: I0129 16:25:04.940420 4886 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-wczvq" Jan 29 16:25:10 crc kubenswrapper[4886]: I0129 16:25:10.671958 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:25:15 crc kubenswrapper[4886]: I0129 16:25:15.829701 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pwcz" Jan 29 16:25:22 crc kubenswrapper[4886]: E0129 16:25:22.345905 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:25:22 crc kubenswrapper[4886]: E0129 16:25:22.346518 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kkvmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-4jbxl_openshift-marketplace(a710476e-74f4-4f7e-ab94-d2428bade61e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 16:25:22 crc kubenswrapper[4886]: E0129 16:25:22.347791 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-4jbxl" podUID="a710476e-74f4-4f7e-ab94-d2428bade61e" Jan 29 16:25:24 crc kubenswrapper[4886]: I0129 16:25:24.484969 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 16:25:24 crc kubenswrapper[4886]: E0129 16:25:24.485653 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c425266a-aaff-4684-a7f6-647dcd8073cd" containerName="pruner" Jan 29 16:25:24 crc kubenswrapper[4886]: I0129 16:25:24.485677 4886 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c425266a-aaff-4684-a7f6-647dcd8073cd" containerName="pruner" Jan 29 16:25:24 crc kubenswrapper[4886]: E0129 16:25:24.485708 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9" containerName="collect-profiles" Jan 29 16:25:24 crc kubenswrapper[4886]: I0129 16:25:24.485722 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9" containerName="collect-profiles" Jan 29 16:25:24 crc kubenswrapper[4886]: I0129 16:25:24.485907 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="c425266a-aaff-4684-a7f6-647dcd8073cd" containerName="pruner" Jan 29 16:25:24 crc kubenswrapper[4886]: I0129 16:25:24.485927 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9" containerName="collect-profiles" Jan 29 16:25:24 crc kubenswrapper[4886]: I0129 16:25:24.486651 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:25:24 crc kubenswrapper[4886]: I0129 16:25:24.491218 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 16:25:24 crc kubenswrapper[4886]: I0129 16:25:24.491476 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 16:25:24 crc kubenswrapper[4886]: I0129 16:25:24.493508 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 16:25:24 crc kubenswrapper[4886]: I0129 16:25:24.571050 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75c03df6-46f4-4ad6-b8ea-7753cceb381c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"75c03df6-46f4-4ad6-b8ea-7753cceb381c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:25:24 crc kubenswrapper[4886]: I0129 16:25:24.571168 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75c03df6-46f4-4ad6-b8ea-7753cceb381c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"75c03df6-46f4-4ad6-b8ea-7753cceb381c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:25:24 crc kubenswrapper[4886]: I0129 16:25:24.672914 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75c03df6-46f4-4ad6-b8ea-7753cceb381c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"75c03df6-46f4-4ad6-b8ea-7753cceb381c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:25:24 crc kubenswrapper[4886]: I0129 16:25:24.673016 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75c03df6-46f4-4ad6-b8ea-7753cceb381c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"75c03df6-46f4-4ad6-b8ea-7753cceb381c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:25:24 crc kubenswrapper[4886]: I0129 16:25:24.673059 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75c03df6-46f4-4ad6-b8ea-7753cceb381c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"75c03df6-46f4-4ad6-b8ea-7753cceb381c\") 
" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:25:24 crc kubenswrapper[4886]: I0129 16:25:24.703074 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75c03df6-46f4-4ad6-b8ea-7753cceb381c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"75c03df6-46f4-4ad6-b8ea-7753cceb381c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:25:24 crc kubenswrapper[4886]: I0129 16:25:24.810301 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:25:29 crc kubenswrapper[4886]: I0129 16:25:29.661172 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:25:29 crc kubenswrapper[4886]: I0129 16:25:29.661300 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:25:29 crc kubenswrapper[4886]: I0129 16:25:29.878008 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 16:25:29 crc kubenswrapper[4886]: I0129 16:25:29.879178 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:25:29 crc kubenswrapper[4886]: I0129 16:25:29.893363 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 16:25:29 crc kubenswrapper[4886]: I0129 16:25:29.941776 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9027a6d8-0cac-4276-b722-08c3a99c6cf9-var-lock\") pod \"installer-9-crc\" (UID: \"9027a6d8-0cac-4276-b722-08c3a99c6cf9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:25:29 crc kubenswrapper[4886]: I0129 16:25:29.941848 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9027a6d8-0cac-4276-b722-08c3a99c6cf9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9027a6d8-0cac-4276-b722-08c3a99c6cf9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:25:29 crc kubenswrapper[4886]: I0129 16:25:29.941871 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9027a6d8-0cac-4276-b722-08c3a99c6cf9-kube-api-access\") pod \"installer-9-crc\" (UID: \"9027a6d8-0cac-4276-b722-08c3a99c6cf9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:25:30 crc kubenswrapper[4886]: I0129 16:25:30.043424 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9027a6d8-0cac-4276-b722-08c3a99c6cf9-var-lock\") pod \"installer-9-crc\" (UID: \"9027a6d8-0cac-4276-b722-08c3a99c6cf9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:25:30 crc kubenswrapper[4886]: I0129 16:25:30.043514 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9027a6d8-0cac-4276-b722-08c3a99c6cf9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9027a6d8-0cac-4276-b722-08c3a99c6cf9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:25:30 crc kubenswrapper[4886]: I0129 16:25:30.043534 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9027a6d8-0cac-4276-b722-08c3a99c6cf9-var-lock\") pod \"installer-9-crc\" (UID: \"9027a6d8-0cac-4276-b722-08c3a99c6cf9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:25:30 crc kubenswrapper[4886]: I0129 16:25:30.043544 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9027a6d8-0cac-4276-b722-08c3a99c6cf9-kube-api-access\") pod \"installer-9-crc\" (UID: \"9027a6d8-0cac-4276-b722-08c3a99c6cf9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:25:30 crc kubenswrapper[4886]: I0129 16:25:30.043606 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9027a6d8-0cac-4276-b722-08c3a99c6cf9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9027a6d8-0cac-4276-b722-08c3a99c6cf9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:25:30 crc kubenswrapper[4886]: I0129 16:25:30.062632 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9027a6d8-0cac-4276-b722-08c3a99c6cf9-kube-api-access\") pod \"installer-9-crc\" (UID: \"9027a6d8-0cac-4276-b722-08c3a99c6cf9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:25:30 crc kubenswrapper[4886]: I0129 16:25:30.201654 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:25:31 crc kubenswrapper[4886]: E0129 16:25:31.803348 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-4jbxl" podUID="a710476e-74f4-4f7e-ab94-d2428bade61e" Jan 29 16:25:31 crc kubenswrapper[4886]: E0129 16:25:31.932244 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:25:31 crc kubenswrapper[4886]: E0129 16:25:31.932434 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qf8xv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-6hph6_openshift-marketplace(c36e6697-37b9-4b10-baea-0f9c92014c79): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 16:25:31 crc kubenswrapper[4886]: E0129 16:25:31.933532 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-6hph6" podUID="c36e6697-37b9-4b10-baea-0f9c92014c79" Jan 29 16:25:33 crc kubenswrapper[4886]: E0129 16:25:33.391709 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-6hph6" podUID="c36e6697-37b9-4b10-baea-0f9c92014c79" Jan 29 16:25:33 crc kubenswrapper[4886]: E0129 16:25:33.550350 4886 log.go:32] "PullImage 
from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:25:33 crc kubenswrapper[4886]: E0129 16:25:33.550770 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gjgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-cj9vs_openshift-marketplace(434ccaea-8a30-4a97-8908-64bc9f550de0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 16:25:33 crc kubenswrapper[4886]: E0129 16:25:33.551979 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-cj9vs" podUID="434ccaea-8a30-4a97-8908-64bc9f550de0" Jan 29 16:25:33 crc kubenswrapper[4886]: E0129 16:25:33.556833 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:25:33 crc kubenswrapper[4886]: E0129 16:25:33.556950 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-96zvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-zs9nq_openshift-marketplace(dd20d05f-cd0f-401e-b18a-2f89354792d0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 16:25:33 crc kubenswrapper[4886]: E0129 16:25:33.558110 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-zs9nq" podUID="dd20d05f-cd0f-401e-b18a-2f89354792d0" Jan 29 16:25:33 crc kubenswrapper[4886]: E0129 16:25:33.572201 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:25:33 crc kubenswrapper[4886]: E0129 16:25:33.572339 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gs29d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qjqm7_openshift-marketplace(057806c7-b5ca-43df-91c7-30a2dc58c011): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 16:25:33 crc kubenswrapper[4886]: E0129 16:25:33.573755 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-qjqm7" podUID="057806c7-b5ca-43df-91c7-30a2dc58c011" Jan 29 16:25:33 crc kubenswrapper[4886]: E0129 16:25:33.595671 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:25:33 crc kubenswrapper[4886]: E0129 16:25:33.595940 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xn6qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-xcj6l_openshift-marketplace(047adc93-cb46-4ba7-bbdf-4d485a08ea6b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 16:25:33 crc kubenswrapper[4886]: E0129 16:25:33.597007 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-xcj6l" podUID="047adc93-cb46-4ba7-bbdf-4d485a08ea6b" Jan 29 16:25:33 crc kubenswrapper[4886]: I0129 16:25:33.875023 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 16:25:33 crc kubenswrapper[4886]: I0129 16:25:33.941118 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 16:25:34 crc kubenswrapper[4886]: I0129 16:25:34.482644 4886 generic.go:334] "Generic (PLEG): container finished" podID="9a50cf2f-b08d-4f5c-a364-d939d83aa205" containerID="ad4b14a6bf7c92a8982778e8545f9b09866de61c9773a7cb6d4bc0cf47f69616" exitCode=0 Jan 29 16:25:34 crc kubenswrapper[4886]: I0129 16:25:34.482948 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-psrrq" event={"ID":"9a50cf2f-b08d-4f5c-a364-d939d83aa205","Type":"ContainerDied","Data":"ad4b14a6bf7c92a8982778e8545f9b09866de61c9773a7cb6d4bc0cf47f69616"} Jan 29 16:25:34 crc kubenswrapper[4886]: I0129 16:25:34.485477 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9027a6d8-0cac-4276-b722-08c3a99c6cf9","Type":"ContainerStarted","Data":"c343d7cf431e697a16a8317ad5a319272ba2d6db4aeee174cb506961f6519cb9"} Jan 29 16:25:34 crc kubenswrapper[4886]: I0129 16:25:34.485556 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9027a6d8-0cac-4276-b722-08c3a99c6cf9","Type":"ContainerStarted","Data":"c78b07716ffb8a4c7dfa38504f62f4211f74dab5deb70928233e82d0c002e686"} Jan 29 16:25:34 crc 
kubenswrapper[4886]: I0129 16:25:34.491255 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-c7wkw" event={"ID":"75261312-030c-44eb-8d08-07a35f5bcfcc","Type":"ContainerStarted","Data":"950ced0ac9a5eff90b05c747da446c5ff5211d078f2f9f58a338fe45d8eb7f7d"} Jan 29 16:25:34 crc kubenswrapper[4886]: I0129 16:25:34.493719 4886 generic.go:334] "Generic (PLEG): container finished" podID="d8a07d27-67fb-47e8-9032-e4f831983d75" containerID="ceae5fdac3eed7f1c5974c445ed3419dbfa10feff4c8309145af3e9ea005f153" exitCode=0 Jan 29 16:25:34 crc kubenswrapper[4886]: I0129 16:25:34.493864 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xzc5s" event={"ID":"d8a07d27-67fb-47e8-9032-e4f831983d75","Type":"ContainerDied","Data":"ceae5fdac3eed7f1c5974c445ed3419dbfa10feff4c8309145af3e9ea005f153"} Jan 29 16:25:34 crc kubenswrapper[4886]: I0129 16:25:34.498317 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"75c03df6-46f4-4ad6-b8ea-7753cceb381c","Type":"ContainerStarted","Data":"1ba042325fa311517cd6ca54caa203c8362d6886b6702721baee36c8cc0278ce"} Jan 29 16:25:34 crc kubenswrapper[4886]: I0129 16:25:34.498386 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"75c03df6-46f4-4ad6-b8ea-7753cceb381c","Type":"ContainerStarted","Data":"6574425d693a11022fe7587317598ce19c56d0d9b615f845cea2e20b0ba131a2"} Jan 29 16:25:34 crc kubenswrapper[4886]: E0129 16:25:34.499762 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qjqm7" podUID="057806c7-b5ca-43df-91c7-30a2dc58c011" Jan 29 16:25:34 crc kubenswrapper[4886]: E0129 16:25:34.503659 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-cj9vs" podUID="434ccaea-8a30-4a97-8908-64bc9f550de0" Jan 29 16:25:34 crc kubenswrapper[4886]: E0129 16:25:34.503659 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-xcj6l" podUID="047adc93-cb46-4ba7-bbdf-4d485a08ea6b" Jan 29 16:25:34 crc kubenswrapper[4886]: E0129 16:25:34.504575 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-zs9nq" podUID="dd20d05f-cd0f-401e-b18a-2f89354792d0" Jan 29 16:25:34 crc kubenswrapper[4886]: I0129 16:25:34.529962 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-c7wkw" podStartSLOduration=186.52994336 podStartE2EDuration="3m6.52994336s" podCreationTimestamp="2026-01-29 16:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 
16:25:34.526925028 +0000 UTC m=+217.435644310" watchObservedRunningTime="2026-01-29 16:25:34.52994336 +0000 UTC m=+217.438662642" Jan 29 16:25:34 crc kubenswrapper[4886]: I0129 16:25:34.609172 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=10.609154559 podStartE2EDuration="10.609154559s" podCreationTimestamp="2026-01-29 16:25:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:25:34.60754552 +0000 UTC m=+217.516264792" watchObservedRunningTime="2026-01-29 16:25:34.609154559 +0000 UTC m=+217.517873831" Jan 29 16:25:34 crc kubenswrapper[4886]: I0129 16:25:34.637025 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=5.637004563 podStartE2EDuration="5.637004563s" podCreationTimestamp="2026-01-29 16:25:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:25:34.633072844 +0000 UTC m=+217.541792136" watchObservedRunningTime="2026-01-29 16:25:34.637004563 +0000 UTC m=+217.545723845" Jan 29 16:25:35 crc kubenswrapper[4886]: I0129 16:25:35.508984 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xzc5s" event={"ID":"d8a07d27-67fb-47e8-9032-e4f831983d75","Type":"ContainerStarted","Data":"233eefe83f891bb8ff6279b8ca319fdb899c0d7dc84bfe73ee251483fff54d0f"} Jan 29 16:25:35 crc kubenswrapper[4886]: I0129 16:25:35.511044 4886 generic.go:334] "Generic (PLEG): container finished" podID="75c03df6-46f4-4ad6-b8ea-7753cceb381c" containerID="1ba042325fa311517cd6ca54caa203c8362d6886b6702721baee36c8cc0278ce" exitCode=0 Jan 29 16:25:35 crc kubenswrapper[4886]: I0129 16:25:35.511155 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"75c03df6-46f4-4ad6-b8ea-7753cceb381c","Type":"ContainerDied","Data":"1ba042325fa311517cd6ca54caa203c8362d6886b6702721baee36c8cc0278ce"} Jan 29 16:25:35 crc kubenswrapper[4886]: I0129 16:25:35.514807 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-psrrq" event={"ID":"9a50cf2f-b08d-4f5c-a364-d939d83aa205","Type":"ContainerStarted","Data":"f397e8eaabe03e0ca454b7d958ea66e73c641d9b198aff7e2d640f8e165743e6"} Jan 29 16:25:35 crc kubenswrapper[4886]: I0129 16:25:35.531640 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xzc5s" podStartSLOduration=2.721096963 podStartE2EDuration="51.531620602s" podCreationTimestamp="2026-01-29 16:24:44 +0000 UTC" firstStartedPulling="2026-01-29 16:24:46.069105956 +0000 UTC m=+168.977825228" lastFinishedPulling="2026-01-29 16:25:34.879629595 +0000 UTC m=+217.788348867" observedRunningTime="2026-01-29 16:25:35.529929801 +0000 UTC m=+218.438649083" watchObservedRunningTime="2026-01-29 16:25:35.531620602 +0000 UTC m=+218.440339874" Jan 29 16:25:35 crc kubenswrapper[4886]: I0129 16:25:35.574174 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-psrrq" podStartSLOduration=2.461200353 podStartE2EDuration="53.574145241s" podCreationTimestamp="2026-01-29 16:24:42 +0000 UTC" firstStartedPulling="2026-01-29 16:24:43.912744963 +0000 UTC m=+166.821464235" lastFinishedPulling="2026-01-29 
16:25:35.025689861 +0000 UTC m=+217.934409123" observedRunningTime="2026-01-29 16:25:35.568479329 +0000 UTC m=+218.477198621" watchObservedRunningTime="2026-01-29 16:25:35.574145241 +0000 UTC m=+218.482864523" Jan 29 16:25:36 crc kubenswrapper[4886]: I0129 16:25:36.740665 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:25:36 crc kubenswrapper[4886]: I0129 16:25:36.831754 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75c03df6-46f4-4ad6-b8ea-7753cceb381c-kubelet-dir\") pod \"75c03df6-46f4-4ad6-b8ea-7753cceb381c\" (UID: \"75c03df6-46f4-4ad6-b8ea-7753cceb381c\") " Jan 29 16:25:36 crc kubenswrapper[4886]: I0129 16:25:36.831839 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75c03df6-46f4-4ad6-b8ea-7753cceb381c-kube-api-access\") pod \"75c03df6-46f4-4ad6-b8ea-7753cceb381c\" (UID: \"75c03df6-46f4-4ad6-b8ea-7753cceb381c\") " Jan 29 16:25:36 crc kubenswrapper[4886]: I0129 16:25:36.831903 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75c03df6-46f4-4ad6-b8ea-7753cceb381c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "75c03df6-46f4-4ad6-b8ea-7753cceb381c" (UID: "75c03df6-46f4-4ad6-b8ea-7753cceb381c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:25:36 crc kubenswrapper[4886]: I0129 16:25:36.832114 4886 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75c03df6-46f4-4ad6-b8ea-7753cceb381c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:25:36 crc kubenswrapper[4886]: I0129 16:25:36.838918 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75c03df6-46f4-4ad6-b8ea-7753cceb381c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "75c03df6-46f4-4ad6-b8ea-7753cceb381c" (UID: "75c03df6-46f4-4ad6-b8ea-7753cceb381c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:25:36 crc kubenswrapper[4886]: I0129 16:25:36.933516 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75c03df6-46f4-4ad6-b8ea-7753cceb381c-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:25:37 crc kubenswrapper[4886]: I0129 16:25:37.526595 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"75c03df6-46f4-4ad6-b8ea-7753cceb381c","Type":"ContainerDied","Data":"6574425d693a11022fe7587317598ce19c56d0d9b615f845cea2e20b0ba131a2"} Jan 29 16:25:37 crc kubenswrapper[4886]: I0129 16:25:37.526645 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6574425d693a11022fe7587317598ce19c56d0d9b615f845cea2e20b0ba131a2" Jan 29 16:25:37 crc kubenswrapper[4886]: I0129 16:25:37.526671 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:25:43 crc kubenswrapper[4886]: I0129 16:25:43.031626 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-psrrq" Jan 29 16:25:43 crc kubenswrapper[4886]: I0129 16:25:43.032479 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-psrrq" Jan 29 16:25:43 crc kubenswrapper[4886]: I0129 16:25:43.536632 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-psrrq" Jan 29 16:25:43 crc kubenswrapper[4886]: I0129 16:25:43.619890 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-psrrq" Jan 29 16:25:43 crc kubenswrapper[4886]: I0129 16:25:43.912111 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-psrrq"] Jan 29 16:25:44 crc kubenswrapper[4886]: I0129 16:25:44.822166 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xzc5s" Jan 29 16:25:44 crc kubenswrapper[4886]: I0129 16:25:44.823834 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xzc5s" Jan 29 16:25:44 crc kubenswrapper[4886]: I0129 16:25:44.877682 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xzc5s" Jan 29 16:25:45 crc kubenswrapper[4886]: I0129 16:25:45.568529 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-psrrq" podUID="9a50cf2f-b08d-4f5c-a364-d939d83aa205" containerName="registry-server" containerID="cri-o://f397e8eaabe03e0ca454b7d958ea66e73c641d9b198aff7e2d640f8e165743e6" gracePeriod=2 Jan 29 16:25:45 crc kubenswrapper[4886]: I0129 16:25:45.612095 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xzc5s" Jan 29 16:25:46 crc kubenswrapper[4886]: I0129 16:25:46.575951 4886 generic.go:334] "Generic (PLEG): container finished" podID="9a50cf2f-b08d-4f5c-a364-d939d83aa205" containerID="f397e8eaabe03e0ca454b7d958ea66e73c641d9b198aff7e2d640f8e165743e6" exitCode=0 Jan 29 16:25:46 crc kubenswrapper[4886]: I0129 16:25:46.576025 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-psrrq" event={"ID":"9a50cf2f-b08d-4f5c-a364-d939d83aa205","Type":"ContainerDied","Data":"f397e8eaabe03e0ca454b7d958ea66e73c641d9b198aff7e2d640f8e165743e6"} Jan 29 16:25:47 crc kubenswrapper[4886]: I0129 16:25:47.422484 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-psrrq" Jan 29 16:25:47 crc kubenswrapper[4886]: I0129 16:25:47.482600 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g68vd\" (UniqueName: \"kubernetes.io/projected/9a50cf2f-b08d-4f5c-a364-d939d83aa205-kube-api-access-g68vd\") pod \"9a50cf2f-b08d-4f5c-a364-d939d83aa205\" (UID: \"9a50cf2f-b08d-4f5c-a364-d939d83aa205\") " Jan 29 16:25:47 crc kubenswrapper[4886]: I0129 16:25:47.482954 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a50cf2f-b08d-4f5c-a364-d939d83aa205-utilities\") pod \"9a50cf2f-b08d-4f5c-a364-d939d83aa205\" (UID: \"9a50cf2f-b08d-4f5c-a364-d939d83aa205\") " Jan 29 16:25:47 crc kubenswrapper[4886]: I0129 16:25:47.483097 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a50cf2f-b08d-4f5c-a364-d939d83aa205-catalog-content\") pod \"9a50cf2f-b08d-4f5c-a364-d939d83aa205\" (UID: \"9a50cf2f-b08d-4f5c-a364-d939d83aa205\") " Jan 29 16:25:47 crc kubenswrapper[4886]: I0129 16:25:47.483646 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a50cf2f-b08d-4f5c-a364-d939d83aa205-utilities" (OuterVolumeSpecName: "utilities") pod "9a50cf2f-b08d-4f5c-a364-d939d83aa205" (UID: "9a50cf2f-b08d-4f5c-a364-d939d83aa205"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:25:47 crc kubenswrapper[4886]: I0129 16:25:47.487594 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a50cf2f-b08d-4f5c-a364-d939d83aa205-kube-api-access-g68vd" (OuterVolumeSpecName: "kube-api-access-g68vd") pod "9a50cf2f-b08d-4f5c-a364-d939d83aa205" (UID: "9a50cf2f-b08d-4f5c-a364-d939d83aa205"). InnerVolumeSpecName "kube-api-access-g68vd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:25:47 crc kubenswrapper[4886]: I0129 16:25:47.538232 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a50cf2f-b08d-4f5c-a364-d939d83aa205-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a50cf2f-b08d-4f5c-a364-d939d83aa205" (UID: "9a50cf2f-b08d-4f5c-a364-d939d83aa205"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:25:47 crc kubenswrapper[4886]: I0129 16:25:47.581920 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jbxl" event={"ID":"a710476e-74f4-4f7e-ab94-d2428bade61e","Type":"ContainerStarted","Data":"f05d2d4560320194303a8b36647eaf5baeadb47a7241ddfee9698d44fa4aaa4c"} Jan 29 16:25:47 crc kubenswrapper[4886]: I0129 16:25:47.584260 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a50cf2f-b08d-4f5c-a364-d939d83aa205-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:25:47 crc kubenswrapper[4886]: I0129 16:25:47.584287 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g68vd\" (UniqueName: \"kubernetes.io/projected/9a50cf2f-b08d-4f5c-a364-d939d83aa205-kube-api-access-g68vd\") on node \"crc\" DevicePath \"\"" Jan 29 16:25:47 crc kubenswrapper[4886]: I0129 16:25:47.584299 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a50cf2f-b08d-4f5c-a364-d939d83aa205-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:25:47 crc kubenswrapper[4886]: I0129 16:25:47.585015 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-psrrq" event={"ID":"9a50cf2f-b08d-4f5c-a364-d939d83aa205","Type":"ContainerDied","Data":"977a500ff43da21b72edc2242140ccdd69d26da152fa09c76f29609579032cbf"} Jan 29 16:25:47 crc kubenswrapper[4886]: I0129 16:25:47.585054 4886 scope.go:117] "RemoveContainer" containerID="f397e8eaabe03e0ca454b7d958ea66e73c641d9b198aff7e2d640f8e165743e6" Jan 29 16:25:47 crc kubenswrapper[4886]: I0129 16:25:47.585101 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-psrrq" Jan 29 16:25:47 crc kubenswrapper[4886]: I0129 16:25:47.599783 4886 scope.go:117] "RemoveContainer" containerID="ad4b14a6bf7c92a8982778e8545f9b09866de61c9773a7cb6d4bc0cf47f69616" Jan 29 16:25:47 crc kubenswrapper[4886]: I0129 16:25:47.620171 4886 scope.go:117] "RemoveContainer" containerID="f96346bc3cddc5b5f42583c8eb8f6cc35656bf523771e55b7bf0bb6b9c122669" Jan 29 16:25:47 crc kubenswrapper[4886]: I0129 16:25:47.632401 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-psrrq"] Jan 29 16:25:47 crc kubenswrapper[4886]: I0129 16:25:47.635767 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-psrrq"] Jan 29 16:25:48 crc kubenswrapper[4886]: I0129 16:25:48.597880 4886 generic.go:334] "Generic (PLEG): container finished" podID="dd20d05f-cd0f-401e-b18a-2f89354792d0" containerID="2838a0bdd722f9e7f7de971f3ef56f281b5be560ab82b4ce2dc92224cbf0042f" exitCode=0 Jan 29 16:25:48 crc kubenswrapper[4886]: I0129 16:25:48.597981 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zs9nq" event={"ID":"dd20d05f-cd0f-401e-b18a-2f89354792d0","Type":"ContainerDied","Data":"2838a0bdd722f9e7f7de971f3ef56f281b5be560ab82b4ce2dc92224cbf0042f"} Jan 29 16:25:48 crc kubenswrapper[4886]: I0129 16:25:48.606514 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcj6l" event={"ID":"047adc93-cb46-4ba7-bbdf-4d485a08ea6b","Type":"ContainerStarted","Data":"11d0ed20cabb97cd96a252527a2f57cbc3a01707b987d53593bc18c03df398cf"} Jan 29 16:25:48 crc kubenswrapper[4886]: I0129 16:25:48.609314 4886 generic.go:334] "Generic (PLEG): container finished" podID="a710476e-74f4-4f7e-ab94-d2428bade61e" containerID="f05d2d4560320194303a8b36647eaf5baeadb47a7241ddfee9698d44fa4aaa4c" exitCode=0 Jan 29 16:25:48 crc kubenswrapper[4886]: I0129 16:25:48.609392 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jbxl" event={"ID":"a710476e-74f4-4f7e-ab94-d2428bade61e","Type":"ContainerDied","Data":"f05d2d4560320194303a8b36647eaf5baeadb47a7241ddfee9698d44fa4aaa4c"} Jan 29 16:25:48 crc kubenswrapper[4886]: I0129 16:25:48.613709 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hph6" event={"ID":"c36e6697-37b9-4b10-baea-0f9c92014c79","Type":"ContainerStarted","Data":"7344b3cddb96e29cffb588d3f380405658d001e938c3fd9a59f0d4c9ea5aa16e"} Jan 29 16:25:48 crc kubenswrapper[4886]: I0129 16:25:48.629479 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a50cf2f-b08d-4f5c-a364-d939d83aa205" path="/var/lib/kubelet/pods/9a50cf2f-b08d-4f5c-a364-d939d83aa205/volumes" Jan 29 16:25:49 crc kubenswrapper[4886]: I0129 16:25:49.622059 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jbxl" event={"ID":"a710476e-74f4-4f7e-ab94-d2428bade61e","Type":"ContainerStarted","Data":"b22791e3d9f615101442f2f7febeb8dc3309e984e4f279202303392053825edf"} Jan 29 16:25:49 crc kubenswrapper[4886]: I0129 16:25:49.626256 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cj9vs" event={"ID":"434ccaea-8a30-4a97-8908-64bc9f550de0","Type":"ContainerStarted","Data":"5848b4e5a6379779bfe01d51a16e2bc5ee511c62178bbd791e055867e63873da"} Jan 29 16:25:49 crc kubenswrapper[4886]: I0129 16:25:49.634647 4886 
generic.go:334] "Generic (PLEG): container finished" podID="c36e6697-37b9-4b10-baea-0f9c92014c79" containerID="7344b3cddb96e29cffb588d3f380405658d001e938c3fd9a59f0d4c9ea5aa16e" exitCode=0 Jan 29 16:25:49 crc kubenswrapper[4886]: I0129 16:25:49.634725 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hph6" event={"ID":"c36e6697-37b9-4b10-baea-0f9c92014c79","Type":"ContainerDied","Data":"7344b3cddb96e29cffb588d3f380405658d001e938c3fd9a59f0d4c9ea5aa16e"} Jan 29 16:25:49 crc kubenswrapper[4886]: I0129 16:25:49.643222 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4jbxl" podStartSLOduration=3.704723832 podStartE2EDuration="1m4.643204906s" podCreationTimestamp="2026-01-29 16:24:45 +0000 UTC" firstStartedPulling="2026-01-29 16:24:48.14957817 +0000 UTC m=+171.058297442" lastFinishedPulling="2026-01-29 16:25:49.088059224 +0000 UTC m=+231.996778516" observedRunningTime="2026-01-29 16:25:49.641088302 +0000 UTC m=+232.549807574" watchObservedRunningTime="2026-01-29 16:25:49.643204906 +0000 UTC m=+232.551924178" Jan 29 16:25:49 crc kubenswrapper[4886]: I0129 16:25:49.643406 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zs9nq" event={"ID":"dd20d05f-cd0f-401e-b18a-2f89354792d0","Type":"ContainerStarted","Data":"3aec1abede58b8faa82b73ab79ff75672caa26cb287c28081010173343956dcc"} Jan 29 16:25:49 crc kubenswrapper[4886]: I0129 16:25:49.649297 4886 generic.go:334] "Generic (PLEG): container finished" podID="047adc93-cb46-4ba7-bbdf-4d485a08ea6b" containerID="11d0ed20cabb97cd96a252527a2f57cbc3a01707b987d53593bc18c03df398cf" exitCode=0 Jan 29 16:25:49 crc kubenswrapper[4886]: I0129 16:25:49.649351 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcj6l" event={"ID":"047adc93-cb46-4ba7-bbdf-4d485a08ea6b","Type":"ContainerDied","Data":"11d0ed20cabb97cd96a252527a2f57cbc3a01707b987d53593bc18c03df398cf"} Jan 29 16:25:49 crc kubenswrapper[4886]: I0129 16:25:49.753769 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zs9nq" podStartSLOduration=3.719074725 podStartE2EDuration="1m5.753749546s" podCreationTimestamp="2026-01-29 16:24:44 +0000 UTC" firstStartedPulling="2026-01-29 16:24:47.090733514 +0000 UTC m=+169.999452786" lastFinishedPulling="2026-01-29 16:25:49.125408335 +0000 UTC m=+232.034127607" observedRunningTime="2026-01-29 16:25:49.74959979 +0000 UTC m=+232.658319052" watchObservedRunningTime="2026-01-29 16:25:49.753749546 +0000 UTC m=+232.662468818" Jan 29 16:25:50 crc kubenswrapper[4886]: I0129 16:25:50.664239 4886 generic.go:334] "Generic (PLEG): container finished" podID="057806c7-b5ca-43df-91c7-30a2dc58c011" containerID="307811e0c4081bf12c363b76eff5629bd7ac5901479db6027a6bd50e6cae2ccc" exitCode=0 Jan 29 16:25:50 crc kubenswrapper[4886]: I0129 16:25:50.664552 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjqm7" event={"ID":"057806c7-b5ca-43df-91c7-30a2dc58c011","Type":"ContainerDied","Data":"307811e0c4081bf12c363b76eff5629bd7ac5901479db6027a6bd50e6cae2ccc"} Jan 29 16:25:50 crc kubenswrapper[4886]: I0129 16:25:50.670655 4886 generic.go:334] "Generic (PLEG): container finished" podID="434ccaea-8a30-4a97-8908-64bc9f550de0" containerID="5848b4e5a6379779bfe01d51a16e2bc5ee511c62178bbd791e055867e63873da" exitCode=0 Jan 29 16:25:50 crc kubenswrapper[4886]: I0129 
16:25:50.670714 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cj9vs" event={"ID":"434ccaea-8a30-4a97-8908-64bc9f550de0","Type":"ContainerDied","Data":"5848b4e5a6379779bfe01d51a16e2bc5ee511c62178bbd791e055867e63873da"} Jan 29 16:25:51 crc kubenswrapper[4886]: I0129 16:25:51.677869 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cj9vs" event={"ID":"434ccaea-8a30-4a97-8908-64bc9f550de0","Type":"ContainerStarted","Data":"adf2c14310b6a7ba403bcc63dd65fff6abbc7aa1ceb7c9a65b7e84de9cf1376b"} Jan 29 16:25:51 crc kubenswrapper[4886]: I0129 16:25:51.680466 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hph6" event={"ID":"c36e6697-37b9-4b10-baea-0f9c92014c79","Type":"ContainerStarted","Data":"9d4035b0a0d02345b7ffc32586d2f6e1f50c9f460c46150e1796f4be0de2d1cc"} Jan 29 16:25:51 crc kubenswrapper[4886]: I0129 16:25:51.682621 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcj6l" event={"ID":"047adc93-cb46-4ba7-bbdf-4d485a08ea6b","Type":"ContainerStarted","Data":"bd7f7f68af6c019f5874ecc65bfcb6fd76594d7f15c29ffa88fbdeda070e9c5b"} Jan 29 16:25:51 crc kubenswrapper[4886]: I0129 16:25:51.684959 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjqm7" event={"ID":"057806c7-b5ca-43df-91c7-30a2dc58c011","Type":"ContainerStarted","Data":"b7cd9c63904e404fe9446a1ff9402be281118c2ffb2023c64847b10d15f887eb"} Jan 29 16:25:51 crc kubenswrapper[4886]: I0129 16:25:51.707352 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cj9vs" podStartSLOduration=2.398803155 podStartE2EDuration="1m9.707314324s" podCreationTimestamp="2026-01-29 16:24:42 +0000 UTC" firstStartedPulling="2026-01-29 16:24:43.920422069 +0000 UTC m=+166.829141341" lastFinishedPulling="2026-01-29 16:25:51.228933238 +0000 UTC m=+234.137652510" observedRunningTime="2026-01-29 16:25:51.706702535 +0000 UTC m=+234.615421807" watchObservedRunningTime="2026-01-29 16:25:51.707314324 +0000 UTC m=+234.616033596" Jan 29 16:25:51 crc kubenswrapper[4886]: I0129 16:25:51.730365 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qjqm7" podStartSLOduration=2.357218062 podStartE2EDuration="1m9.730347852s" podCreationTimestamp="2026-01-29 16:24:42 +0000 UTC" firstStartedPulling="2026-01-29 16:24:43.937344359 +0000 UTC m=+166.846063641" lastFinishedPulling="2026-01-29 16:25:51.310474159 +0000 UTC m=+234.219193431" observedRunningTime="2026-01-29 16:25:51.728608589 +0000 UTC m=+234.637327861" watchObservedRunningTime="2026-01-29 16:25:51.730347852 +0000 UTC m=+234.639067124" Jan 29 16:25:51 crc kubenswrapper[4886]: I0129 16:25:51.771493 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6hph6" podStartSLOduration=2.900802474 podStartE2EDuration="1m6.771476478s" podCreationTimestamp="2026-01-29 16:24:45 +0000 UTC" firstStartedPulling="2026-01-29 16:24:47.087067166 +0000 UTC m=+169.995786438" lastFinishedPulling="2026-01-29 16:25:50.95774117 +0000 UTC m=+233.866460442" observedRunningTime="2026-01-29 16:25:51.770046965 +0000 UTC m=+234.678766237" watchObservedRunningTime="2026-01-29 16:25:51.771476478 +0000 UTC m=+234.680195740" Jan 29 16:25:51 crc kubenswrapper[4886]: I0129 16:25:51.772444 4886 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xcj6l" podStartSLOduration=3.093435544 podStartE2EDuration="1m9.772436197s" podCreationTimestamp="2026-01-29 16:24:42 +0000 UTC" firstStartedPulling="2026-01-29 16:24:43.934422433 +0000 UTC m=+166.843141715" lastFinishedPulling="2026-01-29 16:25:50.613423096 +0000 UTC m=+233.522142368" observedRunningTime="2026-01-29 16:25:51.750372269 +0000 UTC m=+234.659091541" watchObservedRunningTime="2026-01-29 16:25:51.772436197 +0000 UTC m=+234.681155469" Jan 29 16:25:52 crc kubenswrapper[4886]: I0129 16:25:52.630039 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cj9vs" Jan 29 16:25:52 crc kubenswrapper[4886]: I0129 16:25:52.630104 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cj9vs" Jan 29 16:25:52 crc kubenswrapper[4886]: I0129 16:25:52.804876 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xcj6l" Jan 29 16:25:52 crc kubenswrapper[4886]: I0129 16:25:52.804941 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xcj6l" Jan 29 16:25:52 crc kubenswrapper[4886]: I0129 16:25:52.859022 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xcj6l" Jan 29 16:25:53 crc kubenswrapper[4886]: I0129 16:25:53.269508 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qjqm7" Jan 29 16:25:53 crc kubenswrapper[4886]: I0129 16:25:53.269587 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qjqm7" Jan 29 16:25:53 crc kubenswrapper[4886]: I0129 16:25:53.670184 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-cj9vs" podUID="434ccaea-8a30-4a97-8908-64bc9f550de0" containerName="registry-server" probeResult="failure" output=< Jan 29 16:25:53 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Jan 29 16:25:53 crc kubenswrapper[4886]: > Jan 29 16:25:54 crc kubenswrapper[4886]: I0129 16:25:54.306312 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-qjqm7" podUID="057806c7-b5ca-43df-91c7-30a2dc58c011" containerName="registry-server" probeResult="failure" output=< Jan 29 16:25:54 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Jan 29 16:25:54 crc kubenswrapper[4886]: > Jan 29 16:25:55 crc kubenswrapper[4886]: I0129 16:25:55.226026 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zs9nq" Jan 29 16:25:55 crc kubenswrapper[4886]: I0129 16:25:55.226093 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zs9nq" Jan 29 16:25:55 crc kubenswrapper[4886]: I0129 16:25:55.269894 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zs9nq" Jan 29 16:25:55 crc kubenswrapper[4886]: I0129 16:25:55.741787 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zs9nq" Jan 29 16:25:55 crc kubenswrapper[4886]: I0129 16:25:55.835365 
4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6hph6" Jan 29 16:25:55 crc kubenswrapper[4886]: I0129 16:25:55.835610 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6hph6" Jan 29 16:25:56 crc kubenswrapper[4886]: I0129 16:25:56.252385 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4jbxl" Jan 29 16:25:56 crc kubenswrapper[4886]: I0129 16:25:56.252440 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4jbxl" Jan 29 16:25:56 crc kubenswrapper[4886]: I0129 16:25:56.335835 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4jbxl" Jan 29 16:25:56 crc kubenswrapper[4886]: I0129 16:25:56.751662 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4jbxl" Jan 29 16:25:56 crc kubenswrapper[4886]: I0129 16:25:56.873443 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6hph6" podUID="c36e6697-37b9-4b10-baea-0f9c92014c79" containerName="registry-server" probeResult="failure" output=< Jan 29 16:25:56 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Jan 29 16:25:56 crc kubenswrapper[4886]: > Jan 29 16:25:58 crc kubenswrapper[4886]: I0129 16:25:58.927485 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zs9nq"] Jan 29 16:25:58 crc kubenswrapper[4886]: I0129 16:25:58.927857 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zs9nq" podUID="dd20d05f-cd0f-401e-b18a-2f89354792d0" containerName="registry-server" containerID="cri-o://3aec1abede58b8faa82b73ab79ff75672caa26cb287c28081010173343956dcc" gracePeriod=2 Jan 29 16:25:59 crc kubenswrapper[4886]: I0129 16:25:59.660569 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:25:59 crc kubenswrapper[4886]: I0129 16:25:59.660947 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:25:59 crc kubenswrapper[4886]: I0129 16:25:59.661134 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" Jan 29 16:25:59 crc kubenswrapper[4886]: I0129 16:25:59.662018 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028"} pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:25:59 crc kubenswrapper[4886]: I0129 16:25:59.662359 4886 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" containerID="cri-o://8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028" gracePeriod=600 Jan 29 16:25:59 crc kubenswrapper[4886]: I0129 16:25:59.913993 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4jbxl"] Jan 29 16:25:59 crc kubenswrapper[4886]: I0129 16:25:59.914405 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4jbxl" podUID="a710476e-74f4-4f7e-ab94-d2428bade61e" containerName="registry-server" containerID="cri-o://b22791e3d9f615101442f2f7febeb8dc3309e984e4f279202303392053825edf" gracePeriod=2 Jan 29 16:26:01 crc kubenswrapper[4886]: I0129 16:26:01.743584 4886 generic.go:334] "Generic (PLEG): container finished" podID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerID="8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028" exitCode=0 Jan 29 16:26:01 crc kubenswrapper[4886]: I0129 16:26:01.743700 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerDied","Data":"8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028"} Jan 29 16:26:01 crc kubenswrapper[4886]: I0129 16:26:01.747662 4886 generic.go:334] "Generic (PLEG): container finished" podID="dd20d05f-cd0f-401e-b18a-2f89354792d0" containerID="3aec1abede58b8faa82b73ab79ff75672caa26cb287c28081010173343956dcc" exitCode=0 Jan 29 16:26:01 crc kubenswrapper[4886]: I0129 16:26:01.747704 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zs9nq" event={"ID":"dd20d05f-cd0f-401e-b18a-2f89354792d0","Type":"ContainerDied","Data":"3aec1abede58b8faa82b73ab79ff75672caa26cb287c28081010173343956dcc"} Jan 29 16:26:02 crc kubenswrapper[4886]: I0129 16:26:02.688147 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cj9vs" Jan 29 16:26:02 crc kubenswrapper[4886]: I0129 16:26:02.740936 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cj9vs" Jan 29 16:26:02 crc kubenswrapper[4886]: I0129 16:26:02.761522 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jbxl" event={"ID":"a710476e-74f4-4f7e-ab94-d2428bade61e","Type":"ContainerDied","Data":"b22791e3d9f615101442f2f7febeb8dc3309e984e4f279202303392053825edf"} Jan 29 16:26:02 crc kubenswrapper[4886]: I0129 16:26:02.761609 4886 generic.go:334] "Generic (PLEG): container finished" podID="a710476e-74f4-4f7e-ab94-d2428bade61e" containerID="b22791e3d9f615101442f2f7febeb8dc3309e984e4f279202303392053825edf" exitCode=0 Jan 29 16:26:02 crc kubenswrapper[4886]: I0129 16:26:02.843248 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xcj6l" Jan 29 16:26:03 crc kubenswrapper[4886]: I0129 16:26:03.316284 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qjqm7" Jan 29 16:26:03 crc kubenswrapper[4886]: I0129 16:26:03.353447 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qjqm7" Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.035623 
4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zs9nq" Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.116557 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd20d05f-cd0f-401e-b18a-2f89354792d0-utilities\") pod \"dd20d05f-cd0f-401e-b18a-2f89354792d0\" (UID: \"dd20d05f-cd0f-401e-b18a-2f89354792d0\") " Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.116630 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96zvz\" (UniqueName: \"kubernetes.io/projected/dd20d05f-cd0f-401e-b18a-2f89354792d0-kube-api-access-96zvz\") pod \"dd20d05f-cd0f-401e-b18a-2f89354792d0\" (UID: \"dd20d05f-cd0f-401e-b18a-2f89354792d0\") " Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.116671 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd20d05f-cd0f-401e-b18a-2f89354792d0-catalog-content\") pod \"dd20d05f-cd0f-401e-b18a-2f89354792d0\" (UID: \"dd20d05f-cd0f-401e-b18a-2f89354792d0\") " Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.117742 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd20d05f-cd0f-401e-b18a-2f89354792d0-utilities" (OuterVolumeSpecName: "utilities") pod "dd20d05f-cd0f-401e-b18a-2f89354792d0" (UID: "dd20d05f-cd0f-401e-b18a-2f89354792d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.123074 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd20d05f-cd0f-401e-b18a-2f89354792d0-kube-api-access-96zvz" (OuterVolumeSpecName: "kube-api-access-96zvz") pod "dd20d05f-cd0f-401e-b18a-2f89354792d0" (UID: "dd20d05f-cd0f-401e-b18a-2f89354792d0"). InnerVolumeSpecName "kube-api-access-96zvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.157097 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd20d05f-cd0f-401e-b18a-2f89354792d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd20d05f-cd0f-401e-b18a-2f89354792d0" (UID: "dd20d05f-cd0f-401e-b18a-2f89354792d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.218114 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd20d05f-cd0f-401e-b18a-2f89354792d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.218143 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96zvz\" (UniqueName: \"kubernetes.io/projected/dd20d05f-cd0f-401e-b18a-2f89354792d0-kube-api-access-96zvz\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.218153 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd20d05f-cd0f-401e-b18a-2f89354792d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.347531 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4jbxl" Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.419493 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a710476e-74f4-4f7e-ab94-d2428bade61e-utilities\") pod \"a710476e-74f4-4f7e-ab94-d2428bade61e\" (UID: \"a710476e-74f4-4f7e-ab94-d2428bade61e\") " Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.419627 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkvmx\" (UniqueName: \"kubernetes.io/projected/a710476e-74f4-4f7e-ab94-d2428bade61e-kube-api-access-kkvmx\") pod \"a710476e-74f4-4f7e-ab94-d2428bade61e\" (UID: \"a710476e-74f4-4f7e-ab94-d2428bade61e\") " Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.419657 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a710476e-74f4-4f7e-ab94-d2428bade61e-catalog-content\") pod \"a710476e-74f4-4f7e-ab94-d2428bade61e\" (UID: \"a710476e-74f4-4f7e-ab94-d2428bade61e\") " Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.421546 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a710476e-74f4-4f7e-ab94-d2428bade61e-utilities" (OuterVolumeSpecName: "utilities") pod "a710476e-74f4-4f7e-ab94-d2428bade61e" (UID: "a710476e-74f4-4f7e-ab94-d2428bade61e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.423056 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a710476e-74f4-4f7e-ab94-d2428bade61e-kube-api-access-kkvmx" (OuterVolumeSpecName: "kube-api-access-kkvmx") pod "a710476e-74f4-4f7e-ab94-d2428bade61e" (UID: "a710476e-74f4-4f7e-ab94-d2428bade61e"). InnerVolumeSpecName "kube-api-access-kkvmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.521793 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkvmx\" (UniqueName: \"kubernetes.io/projected/a710476e-74f4-4f7e-ab94-d2428bade61e-kube-api-access-kkvmx\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.521872 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a710476e-74f4-4f7e-ab94-d2428bade61e-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.781486 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerStarted","Data":"96fb4b3b0684eec0f8e815c984345d77640459634c9d28cbf8434505ebf34891"} Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.783743 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zs9nq" event={"ID":"dd20d05f-cd0f-401e-b18a-2f89354792d0","Type":"ContainerDied","Data":"3a14ec6fcf7e574cbb7bb1e550a27abeaf3193fe3131800ddd76cb089990f9d3"} Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.783780 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zs9nq" Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.783781 4886 scope.go:117] "RemoveContainer" containerID="3aec1abede58b8faa82b73ab79ff75672caa26cb287c28081010173343956dcc" Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.786506 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jbxl" event={"ID":"a710476e-74f4-4f7e-ab94-d2428bade61e","Type":"ContainerDied","Data":"1b7c7ac95d6deb14d58d68d8614d14207966e7b0c294b7297faa9446ddd99953"} Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.786617 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4jbxl" Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.809241 4886 scope.go:117] "RemoveContainer" containerID="2838a0bdd722f9e7f7de971f3ef56f281b5be560ab82b4ce2dc92224cbf0042f" Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.813409 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a710476e-74f4-4f7e-ab94-d2428bade61e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a710476e-74f4-4f7e-ab94-d2428bade61e" (UID: "a710476e-74f4-4f7e-ab94-d2428bade61e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.814766 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zs9nq"] Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.818702 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zs9nq"] Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.826713 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a710476e-74f4-4f7e-ab94-d2428bade61e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.840021 4886 scope.go:117] "RemoveContainer" containerID="993aeae10b51b9ba867b7ad588cb7c6e7651b0c3345b073059af7a58ad9790c3" Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.864902 4886 scope.go:117] "RemoveContainer" containerID="b22791e3d9f615101442f2f7febeb8dc3309e984e4f279202303392053825edf" Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.932745 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mpttg"] Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.938906 4886 scope.go:117] "RemoveContainer" containerID="f05d2d4560320194303a8b36647eaf5baeadb47a7241ddfee9698d44fa4aaa4c" Jan 29 16:26:04 crc kubenswrapper[4886]: I0129 16:26:04.959636 4886 scope.go:117] "RemoveContainer" containerID="542d74b470422150123685d3edf24455da6a5470e04d40768b0ed7b1e8d27bc4" Jan 29 16:26:05 crc kubenswrapper[4886]: I0129 16:26:05.114645 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4jbxl"] Jan 29 16:26:05 crc kubenswrapper[4886]: I0129 16:26:05.114690 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qjqm7"] Jan 29 16:26:05 crc kubenswrapper[4886]: I0129 16:26:05.114895 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qjqm7" podUID="057806c7-b5ca-43df-91c7-30a2dc58c011" containerName="registry-server" 
containerID="cri-o://b7cd9c63904e404fe9446a1ff9402be281118c2ffb2023c64847b10d15f887eb" gracePeriod=2 Jan 29 16:26:05 crc kubenswrapper[4886]: I0129 16:26:05.124534 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4jbxl"] Jan 29 16:26:05 crc kubenswrapper[4886]: I0129 16:26:05.887483 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6hph6" Jan 29 16:26:05 crc kubenswrapper[4886]: I0129 16:26:05.930982 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6hph6" Jan 29 16:26:06 crc kubenswrapper[4886]: I0129 16:26:06.620960 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a710476e-74f4-4f7e-ab94-d2428bade61e" path="/var/lib/kubelet/pods/a710476e-74f4-4f7e-ab94-d2428bade61e/volumes" Jan 29 16:26:06 crc kubenswrapper[4886]: I0129 16:26:06.632110 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd20d05f-cd0f-401e-b18a-2f89354792d0" path="/var/lib/kubelet/pods/dd20d05f-cd0f-401e-b18a-2f89354792d0/volumes" Jan 29 16:26:06 crc kubenswrapper[4886]: I0129 16:26:06.780284 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qjqm7" Jan 29 16:26:06 crc kubenswrapper[4886]: I0129 16:26:06.811341 4886 generic.go:334] "Generic (PLEG): container finished" podID="057806c7-b5ca-43df-91c7-30a2dc58c011" containerID="b7cd9c63904e404fe9446a1ff9402be281118c2ffb2023c64847b10d15f887eb" exitCode=0 Jan 29 16:26:06 crc kubenswrapper[4886]: I0129 16:26:06.811409 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qjqm7" Jan 29 16:26:06 crc kubenswrapper[4886]: I0129 16:26:06.811430 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjqm7" event={"ID":"057806c7-b5ca-43df-91c7-30a2dc58c011","Type":"ContainerDied","Data":"b7cd9c63904e404fe9446a1ff9402be281118c2ffb2023c64847b10d15f887eb"} Jan 29 16:26:06 crc kubenswrapper[4886]: I0129 16:26:06.811484 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjqm7" event={"ID":"057806c7-b5ca-43df-91c7-30a2dc58c011","Type":"ContainerDied","Data":"b80b8058bdb8fd4eef83ffeccee0a93733e929325e740b25b1e55fdba478cf66"} Jan 29 16:26:06 crc kubenswrapper[4886]: I0129 16:26:06.811504 4886 scope.go:117] "RemoveContainer" containerID="b7cd9c63904e404fe9446a1ff9402be281118c2ffb2023c64847b10d15f887eb" Jan 29 16:26:06 crc kubenswrapper[4886]: I0129 16:26:06.829208 4886 scope.go:117] "RemoveContainer" containerID="307811e0c4081bf12c363b76eff5629bd7ac5901479db6027a6bd50e6cae2ccc" Jan 29 16:26:06 crc kubenswrapper[4886]: I0129 16:26:06.843804 4886 scope.go:117] "RemoveContainer" containerID="a2a6cbc6c2cee221b3e74aba38fce6c75da0d8e08f7766fa4a0eb1f485c41312" Jan 29 16:26:06 crc kubenswrapper[4886]: I0129 16:26:06.851313 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gs29d\" (UniqueName: \"kubernetes.io/projected/057806c7-b5ca-43df-91c7-30a2dc58c011-kube-api-access-gs29d\") pod \"057806c7-b5ca-43df-91c7-30a2dc58c011\" (UID: \"057806c7-b5ca-43df-91c7-30a2dc58c011\") " Jan 29 16:26:06 crc kubenswrapper[4886]: I0129 16:26:06.851424 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/057806c7-b5ca-43df-91c7-30a2dc58c011-catalog-content\") pod \"057806c7-b5ca-43df-91c7-30a2dc58c011\" (UID: \"057806c7-b5ca-43df-91c7-30a2dc58c011\") " Jan 29 16:26:06 crc kubenswrapper[4886]: I0129 16:26:06.851513 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/057806c7-b5ca-43df-91c7-30a2dc58c011-utilities\") pod \"057806c7-b5ca-43df-91c7-30a2dc58c011\" (UID: \"057806c7-b5ca-43df-91c7-30a2dc58c011\") " Jan 29 16:26:06 crc kubenswrapper[4886]: I0129 16:26:06.853119 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/057806c7-b5ca-43df-91c7-30a2dc58c011-utilities" (OuterVolumeSpecName: "utilities") pod "057806c7-b5ca-43df-91c7-30a2dc58c011" (UID: "057806c7-b5ca-43df-91c7-30a2dc58c011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:26:06 crc kubenswrapper[4886]: I0129 16:26:06.858277 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/057806c7-b5ca-43df-91c7-30a2dc58c011-kube-api-access-gs29d" (OuterVolumeSpecName: "kube-api-access-gs29d") pod "057806c7-b5ca-43df-91c7-30a2dc58c011" (UID: "057806c7-b5ca-43df-91c7-30a2dc58c011"). InnerVolumeSpecName "kube-api-access-gs29d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:26:06 crc kubenswrapper[4886]: I0129 16:26:06.863599 4886 scope.go:117] "RemoveContainer" containerID="b7cd9c63904e404fe9446a1ff9402be281118c2ffb2023c64847b10d15f887eb" Jan 29 16:26:06 crc kubenswrapper[4886]: E0129 16:26:06.864126 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7cd9c63904e404fe9446a1ff9402be281118c2ffb2023c64847b10d15f887eb\": container with ID starting with b7cd9c63904e404fe9446a1ff9402be281118c2ffb2023c64847b10d15f887eb not found: ID does not exist" containerID="b7cd9c63904e404fe9446a1ff9402be281118c2ffb2023c64847b10d15f887eb" Jan 29 16:26:06 crc kubenswrapper[4886]: I0129 16:26:06.864162 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7cd9c63904e404fe9446a1ff9402be281118c2ffb2023c64847b10d15f887eb"} err="failed to get container status \"b7cd9c63904e404fe9446a1ff9402be281118c2ffb2023c64847b10d15f887eb\": rpc error: code = NotFound desc = could not find container \"b7cd9c63904e404fe9446a1ff9402be281118c2ffb2023c64847b10d15f887eb\": container with ID starting with b7cd9c63904e404fe9446a1ff9402be281118c2ffb2023c64847b10d15f887eb not found: ID does not exist" Jan 29 16:26:06 crc kubenswrapper[4886]: I0129 16:26:06.864188 4886 scope.go:117] "RemoveContainer" containerID="307811e0c4081bf12c363b76eff5629bd7ac5901479db6027a6bd50e6cae2ccc" Jan 29 16:26:06 crc kubenswrapper[4886]: E0129 16:26:06.864548 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"307811e0c4081bf12c363b76eff5629bd7ac5901479db6027a6bd50e6cae2ccc\": container with ID starting with 307811e0c4081bf12c363b76eff5629bd7ac5901479db6027a6bd50e6cae2ccc not found: ID does not exist" containerID="307811e0c4081bf12c363b76eff5629bd7ac5901479db6027a6bd50e6cae2ccc" Jan 29 16:26:06 crc kubenswrapper[4886]: I0129 16:26:06.864573 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"307811e0c4081bf12c363b76eff5629bd7ac5901479db6027a6bd50e6cae2ccc"} err="failed to get container 
status \"307811e0c4081bf12c363b76eff5629bd7ac5901479db6027a6bd50e6cae2ccc\": rpc error: code = NotFound desc = could not find container \"307811e0c4081bf12c363b76eff5629bd7ac5901479db6027a6bd50e6cae2ccc\": container with ID starting with 307811e0c4081bf12c363b76eff5629bd7ac5901479db6027a6bd50e6cae2ccc not found: ID does not exist" Jan 29 16:26:06 crc kubenswrapper[4886]: I0129 16:26:06.864586 4886 scope.go:117] "RemoveContainer" containerID="a2a6cbc6c2cee221b3e74aba38fce6c75da0d8e08f7766fa4a0eb1f485c41312" Jan 29 16:26:06 crc kubenswrapper[4886]: E0129 16:26:06.864844 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2a6cbc6c2cee221b3e74aba38fce6c75da0d8e08f7766fa4a0eb1f485c41312\": container with ID starting with a2a6cbc6c2cee221b3e74aba38fce6c75da0d8e08f7766fa4a0eb1f485c41312 not found: ID does not exist" containerID="a2a6cbc6c2cee221b3e74aba38fce6c75da0d8e08f7766fa4a0eb1f485c41312" Jan 29 16:26:06 crc kubenswrapper[4886]: I0129 16:26:06.864879 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2a6cbc6c2cee221b3e74aba38fce6c75da0d8e08f7766fa4a0eb1f485c41312"} err="failed to get container status \"a2a6cbc6c2cee221b3e74aba38fce6c75da0d8e08f7766fa4a0eb1f485c41312\": rpc error: code = NotFound desc = could not find container \"a2a6cbc6c2cee221b3e74aba38fce6c75da0d8e08f7766fa4a0eb1f485c41312\": container with ID starting with a2a6cbc6c2cee221b3e74aba38fce6c75da0d8e08f7766fa4a0eb1f485c41312 not found: ID does not exist" Jan 29 16:26:06 crc kubenswrapper[4886]: I0129 16:26:06.902031 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/057806c7-b5ca-43df-91c7-30a2dc58c011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "057806c7-b5ca-43df-91c7-30a2dc58c011" (UID: "057806c7-b5ca-43df-91c7-30a2dc58c011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:26:06 crc kubenswrapper[4886]: I0129 16:26:06.952733 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/057806c7-b5ca-43df-91c7-30a2dc58c011-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:06 crc kubenswrapper[4886]: I0129 16:26:06.952776 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gs29d\" (UniqueName: \"kubernetes.io/projected/057806c7-b5ca-43df-91c7-30a2dc58c011-kube-api-access-gs29d\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:06 crc kubenswrapper[4886]: I0129 16:26:06.952789 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/057806c7-b5ca-43df-91c7-30a2dc58c011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:07 crc kubenswrapper[4886]: I0129 16:26:07.144630 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qjqm7"] Jan 29 16:26:07 crc kubenswrapper[4886]: I0129 16:26:07.148262 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qjqm7"] Jan 29 16:26:08 crc kubenswrapper[4886]: I0129 16:26:08.625747 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="057806c7-b5ca-43df-91c7-30a2dc58c011" path="/var/lib/kubelet/pods/057806c7-b5ca-43df-91c7-30a2dc58c011/volumes" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.884194 4886 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 16:26:11 crc kubenswrapper[4886]: E0129 16:26:11.884944 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a50cf2f-b08d-4f5c-a364-d939d83aa205" containerName="registry-server" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.884957 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a50cf2f-b08d-4f5c-a364-d939d83aa205" containerName="registry-server" Jan 29 16:26:11 crc kubenswrapper[4886]: E0129 16:26:11.884966 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd20d05f-cd0f-401e-b18a-2f89354792d0" containerName="registry-server" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.884972 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd20d05f-cd0f-401e-b18a-2f89354792d0" containerName="registry-server" Jan 29 16:26:11 crc kubenswrapper[4886]: E0129 16:26:11.884981 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd20d05f-cd0f-401e-b18a-2f89354792d0" containerName="extract-utilities" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.884987 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd20d05f-cd0f-401e-b18a-2f89354792d0" containerName="extract-utilities" Jan 29 16:26:11 crc kubenswrapper[4886]: E0129 16:26:11.884996 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd20d05f-cd0f-401e-b18a-2f89354792d0" containerName="extract-content" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.885004 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd20d05f-cd0f-401e-b18a-2f89354792d0" containerName="extract-content" Jan 29 16:26:11 crc kubenswrapper[4886]: E0129 16:26:11.885014 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="057806c7-b5ca-43df-91c7-30a2dc58c011" containerName="registry-server" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.885020 4886 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="057806c7-b5ca-43df-91c7-30a2dc58c011" containerName="registry-server" Jan 29 16:26:11 crc kubenswrapper[4886]: E0129 16:26:11.885030 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a710476e-74f4-4f7e-ab94-d2428bade61e" containerName="extract-content" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.885035 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="a710476e-74f4-4f7e-ab94-d2428bade61e" containerName="extract-content" Jan 29 16:26:11 crc kubenswrapper[4886]: E0129 16:26:11.885045 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a710476e-74f4-4f7e-ab94-d2428bade61e" containerName="registry-server" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.885050 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="a710476e-74f4-4f7e-ab94-d2428bade61e" containerName="registry-server" Jan 29 16:26:11 crc kubenswrapper[4886]: E0129 16:26:11.885058 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a50cf2f-b08d-4f5c-a364-d939d83aa205" containerName="extract-utilities" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.885064 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a50cf2f-b08d-4f5c-a364-d939d83aa205" containerName="extract-utilities" Jan 29 16:26:11 crc kubenswrapper[4886]: E0129 16:26:11.885071 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="057806c7-b5ca-43df-91c7-30a2dc58c011" containerName="extract-utilities" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.885077 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="057806c7-b5ca-43df-91c7-30a2dc58c011" containerName="extract-utilities" Jan 29 16:26:11 crc kubenswrapper[4886]: E0129 16:26:11.885088 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="057806c7-b5ca-43df-91c7-30a2dc58c011" containerName="extract-content" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.885094 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="057806c7-b5ca-43df-91c7-30a2dc58c011" containerName="extract-content" Jan 29 16:26:11 crc kubenswrapper[4886]: E0129 16:26:11.885103 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75c03df6-46f4-4ad6-b8ea-7753cceb381c" containerName="pruner" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.885108 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="75c03df6-46f4-4ad6-b8ea-7753cceb381c" containerName="pruner" Jan 29 16:26:11 crc kubenswrapper[4886]: E0129 16:26:11.885115 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a710476e-74f4-4f7e-ab94-d2428bade61e" containerName="extract-utilities" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.885120 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="a710476e-74f4-4f7e-ab94-d2428bade61e" containerName="extract-utilities" Jan 29 16:26:11 crc kubenswrapper[4886]: E0129 16:26:11.885127 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a50cf2f-b08d-4f5c-a364-d939d83aa205" containerName="extract-content" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.885133 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a50cf2f-b08d-4f5c-a364-d939d83aa205" containerName="extract-content" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.885225 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="75c03df6-46f4-4ad6-b8ea-7753cceb381c" containerName="pruner" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.885237 4886 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="dd20d05f-cd0f-401e-b18a-2f89354792d0" containerName="registry-server" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.885243 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="057806c7-b5ca-43df-91c7-30a2dc58c011" containerName="registry-server" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.885253 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="a710476e-74f4-4f7e-ab94-d2428bade61e" containerName="registry-server" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.885261 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a50cf2f-b08d-4f5c-a364-d939d83aa205" containerName="registry-server" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.885557 4886 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.885774 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.885918 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc" gracePeriod=15 Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.886005 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://2d2126e0e150d4a578976def8715d596ae31d0561b0eaa832061d4fb86a8a930" gracePeriod=15 Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.886058 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749" gracePeriod=15 Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.886080 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981" gracePeriod=15 Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.886387 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff" gracePeriod=15 Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.888107 4886 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 16:26:11 crc kubenswrapper[4886]: E0129 16:26:11.888254 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.888265 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Jan 29 16:26:11 crc kubenswrapper[4886]: E0129 16:26:11.888275 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.888281 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:26:11 crc kubenswrapper[4886]: E0129 16:26:11.888288 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.888293 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 16:26:11 crc kubenswrapper[4886]: E0129 16:26:11.888300 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.888306 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 16:26:11 crc kubenswrapper[4886]: E0129 16:26:11.888323 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.888349 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 16:26:11 crc kubenswrapper[4886]: E0129 16:26:11.888358 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.888364 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 16:26:11 crc kubenswrapper[4886]: E0129 16:26:11.888376 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.888382 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.888462 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.888471 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.888480 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.888487 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.888496 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 16:26:11 crc 
kubenswrapper[4886]: I0129 16:26:11.888507 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.888514 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 16:26:11 crc kubenswrapper[4886]: E0129 16:26:11.888593 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.888599 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.911526 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.911604 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.911694 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.912242 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.912397 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.912457 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.913293 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") 
pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.913372 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:26:11 crc kubenswrapper[4886]: I0129 16:26:11.921500 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.014217 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.014529 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.014570 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.014589 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.014578 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.014621 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.014636 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.014669 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.014669 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.014692 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.014710 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.014746 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.014753 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.014780 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.014789 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.014817 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.219867 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:26:12 crc kubenswrapper[4886]: W0129 16:26:12.241269 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-0386712ddaeec2f4509b379ed96a080a64a88a43a08f6f7600c59c97f88bb567 WatchSource:0}: Error finding container 0386712ddaeec2f4509b379ed96a080a64a88a43a08f6f7600c59c97f88bb567: Status 404 returned error can't find the container with id 0386712ddaeec2f4509b379ed96a080a64a88a43a08f6f7600c59c97f88bb567 Jan 29 16:26:12 crc kubenswrapper[4886]: E0129 16:26:12.245001 4886 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.174:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f4062ef2fd167 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 16:26:12.243755367 +0000 UTC m=+255.152474659,LastTimestamp:2026-01-29 16:26:12.243755367 +0000 UTC m=+255.152474659,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 16:26:12 crc kubenswrapper[4886]: E0129 16:26:12.439213 4886 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.174:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f4062ef2fd167 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 16:26:12.243755367 +0000 UTC m=+255.152474659,LastTimestamp:2026-01-29 16:26:12.243755367 +0000 UTC m=+255.152474659,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.843595 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e338e481af24aecd5ce5485aecf3d5729c1fbb23b68efbbc211fd833fc6aa1fa"} Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.843659 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"0386712ddaeec2f4509b379ed96a080a64a88a43a08f6f7600c59c97f88bb567"} Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.844303 4886 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.845080 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.846064 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.846624 4886 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2d2126e0e150d4a578976def8715d596ae31d0561b0eaa832061d4fb86a8a930" exitCode=0 Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.846643 4886 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff" exitCode=0 Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.846653 4886 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749" exitCode=0 Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.846661 4886 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981" exitCode=2 Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.846710 4886 scope.go:117] "RemoveContainer" containerID="8bbfe403372c663d59079e8c4111846693950b0eca93a07be737c20775395f88" Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.848203 4886 generic.go:334] "Generic (PLEG): container finished" podID="9027a6d8-0cac-4276-b722-08c3a99c6cf9" containerID="c343d7cf431e697a16a8317ad5a319272ba2d6db4aeee174cb506961f6519cb9" exitCode=0 Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.848244 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9027a6d8-0cac-4276-b722-08c3a99c6cf9","Type":"ContainerDied","Data":"c343d7cf431e697a16a8317ad5a319272ba2d6db4aeee174cb506961f6519cb9"} Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.848835 4886 status_manager.go:851] "Failed to get status for pod" podUID="9027a6d8-0cac-4276-b722-08c3a99c6cf9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:12 crc kubenswrapper[4886]: I0129 16:26:12.849223 4886 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:13 crc kubenswrapper[4886]: I0129 16:26:13.855976 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.157989 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.158668 4886 status_manager.go:851] "Failed to get status for pod" podUID="9027a6d8-0cac-4276-b722-08c3a99c6cf9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.158929 4886 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.257154 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.258077 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.258746 4886 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.259212 4886 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.259632 4886 status_manager.go:851] "Failed to get status for pod" podUID="9027a6d8-0cac-4276-b722-08c3a99c6cf9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.341561 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9027a6d8-0cac-4276-b722-08c3a99c6cf9-kubelet-dir\") pod \"9027a6d8-0cac-4276-b722-08c3a99c6cf9\" (UID: \"9027a6d8-0cac-4276-b722-08c3a99c6cf9\") " Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.341640 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/9027a6d8-0cac-4276-b722-08c3a99c6cf9-kube-api-access\") pod \"9027a6d8-0cac-4276-b722-08c3a99c6cf9\" (UID: \"9027a6d8-0cac-4276-b722-08c3a99c6cf9\") " Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.341691 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9027a6d8-0cac-4276-b722-08c3a99c6cf9-var-lock\") pod \"9027a6d8-0cac-4276-b722-08c3a99c6cf9\" (UID: \"9027a6d8-0cac-4276-b722-08c3a99c6cf9\") " Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.341706 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9027a6d8-0cac-4276-b722-08c3a99c6cf9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9027a6d8-0cac-4276-b722-08c3a99c6cf9" (UID: "9027a6d8-0cac-4276-b722-08c3a99c6cf9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.341848 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9027a6d8-0cac-4276-b722-08c3a99c6cf9-var-lock" (OuterVolumeSpecName: "var-lock") pod "9027a6d8-0cac-4276-b722-08c3a99c6cf9" (UID: "9027a6d8-0cac-4276-b722-08c3a99c6cf9"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.342080 4886 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9027a6d8-0cac-4276-b722-08c3a99c6cf9-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.342103 4886 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9027a6d8-0cac-4276-b722-08c3a99c6cf9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.347488 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9027a6d8-0cac-4276-b722-08c3a99c6cf9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9027a6d8-0cac-4276-b722-08c3a99c6cf9" (UID: "9027a6d8-0cac-4276-b722-08c3a99c6cf9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.443569 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.443774 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.443818 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.443867 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.443872 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.443971 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.444146 4886 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.444166 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9027a6d8-0cac-4276-b722-08c3a99c6cf9-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.444178 4886 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.444192 4886 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.622580 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.865934 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.867279 4886 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc" exitCode=0 Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.867378 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.867380 4886 scope.go:117] "RemoveContainer" containerID="2d2126e0e150d4a578976def8715d596ae31d0561b0eaa832061d4fb86a8a930" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.868045 4886 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.868258 4886 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.868494 4886 status_manager.go:851] "Failed to get status for pod" podUID="9027a6d8-0cac-4276-b722-08c3a99c6cf9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.868674 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9027a6d8-0cac-4276-b722-08c3a99c6cf9","Type":"ContainerDied","Data":"c78b07716ffb8a4c7dfa38504f62f4211f74dab5deb70928233e82d0c002e686"} Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.868705 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.868696 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c78b07716ffb8a4c7dfa38504f62f4211f74dab5deb70928233e82d0c002e686" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.871082 4886 status_manager.go:851] "Failed to get status for pod" podUID="9027a6d8-0cac-4276-b722-08c3a99c6cf9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.871308 4886 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.871739 4886 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.873082 4886 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.873377 4886 status_manager.go:851] "Failed to get status for pod" podUID="9027a6d8-0cac-4276-b722-08c3a99c6cf9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.873686 4886 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.881620 4886 scope.go:117] "RemoveContainer" containerID="40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.897750 4886 scope.go:117] "RemoveContainer" containerID="2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.912553 4886 scope.go:117] "RemoveContainer" containerID="ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.926167 4886 scope.go:117] "RemoveContainer" containerID="b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.940880 4886 scope.go:117] "RemoveContainer" 
containerID="92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.957314 4886 scope.go:117] "RemoveContainer" containerID="2d2126e0e150d4a578976def8715d596ae31d0561b0eaa832061d4fb86a8a930" Jan 29 16:26:14 crc kubenswrapper[4886]: E0129 16:26:14.957760 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d2126e0e150d4a578976def8715d596ae31d0561b0eaa832061d4fb86a8a930\": container with ID starting with 2d2126e0e150d4a578976def8715d596ae31d0561b0eaa832061d4fb86a8a930 not found: ID does not exist" containerID="2d2126e0e150d4a578976def8715d596ae31d0561b0eaa832061d4fb86a8a930" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.957901 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d2126e0e150d4a578976def8715d596ae31d0561b0eaa832061d4fb86a8a930"} err="failed to get container status \"2d2126e0e150d4a578976def8715d596ae31d0561b0eaa832061d4fb86a8a930\": rpc error: code = NotFound desc = could not find container \"2d2126e0e150d4a578976def8715d596ae31d0561b0eaa832061d4fb86a8a930\": container with ID starting with 2d2126e0e150d4a578976def8715d596ae31d0561b0eaa832061d4fb86a8a930 not found: ID does not exist" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.958008 4886 scope.go:117] "RemoveContainer" containerID="40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff" Jan 29 16:26:14 crc kubenswrapper[4886]: E0129 16:26:14.958522 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\": container with ID starting with 40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff not found: ID does not exist" containerID="40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.958557 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff"} err="failed to get container status \"40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\": rpc error: code = NotFound desc = could not find container \"40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff\": container with ID starting with 40c80ff4d9a5e63764163d3748d2ade63000eb35bda512cf37a51c9f8b805fff not found: ID does not exist" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.958583 4886 scope.go:117] "RemoveContainer" containerID="2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749" Jan 29 16:26:14 crc kubenswrapper[4886]: E0129 16:26:14.958868 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\": container with ID starting with 2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749 not found: ID does not exist" containerID="2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.958891 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749"} err="failed to get container status \"2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\": rpc error: code = 
NotFound desc = could not find container \"2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749\": container with ID starting with 2aaea10d8ea0e36361380eb0c535a3fdc5b51d62e499adcbc5d57558b58e8749 not found: ID does not exist" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.958927 4886 scope.go:117] "RemoveContainer" containerID="ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981" Jan 29 16:26:14 crc kubenswrapper[4886]: E0129 16:26:14.959301 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\": container with ID starting with ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981 not found: ID does not exist" containerID="ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.959415 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981"} err="failed to get container status \"ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\": rpc error: code = NotFound desc = could not find container \"ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981\": container with ID starting with ad6238fc03a0e7aa722791bda44bbaeca8a7269580529a4dd5d62cf0d1e39981 not found: ID does not exist" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.959500 4886 scope.go:117] "RemoveContainer" containerID="b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc" Jan 29 16:26:14 crc kubenswrapper[4886]: E0129 16:26:14.959843 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\": container with ID starting with b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc not found: ID does not exist" containerID="b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.959878 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc"} err="failed to get container status \"b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\": rpc error: code = NotFound desc = could not find container \"b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc\": container with ID starting with b9f3a2de52a936816a5d1e98920861b324b9980bf8a60336caab039ebbd563cc not found: ID does not exist" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.959899 4886 scope.go:117] "RemoveContainer" containerID="92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08" Jan 29 16:26:14 crc kubenswrapper[4886]: E0129 16:26:14.960149 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\": container with ID starting with 92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08 not found: ID does not exist" containerID="92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08" Jan 29 16:26:14 crc kubenswrapper[4886]: I0129 16:26:14.960174 4886 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08"} err="failed to get container status \"92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\": rpc error: code = NotFound desc = could not find container \"92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08\": container with ID starting with 92a0f5389357492bf461db75ffb1ced7fa106c160b16e7e701f99f90a0c8fb08 not found: ID does not exist" Jan 29 16:26:15 crc kubenswrapper[4886]: E0129 16:26:15.205023 4886 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:15 crc kubenswrapper[4886]: E0129 16:26:15.205770 4886 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:15 crc kubenswrapper[4886]: E0129 16:26:15.206271 4886 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:15 crc kubenswrapper[4886]: E0129 16:26:15.206716 4886 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:15 crc kubenswrapper[4886]: E0129 16:26:15.207097 4886 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:15 crc kubenswrapper[4886]: I0129 16:26:15.207140 4886 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 29 16:26:15 crc kubenswrapper[4886]: E0129 16:26:15.207470 4886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.174:6443: connect: connection refused" interval="200ms" Jan 29 16:26:15 crc kubenswrapper[4886]: E0129 16:26:15.408035 4886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.174:6443: connect: connection refused" interval="400ms" Jan 29 16:26:15 crc kubenswrapper[4886]: E0129 16:26:15.809718 4886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.174:6443: connect: connection refused" interval="800ms" Jan 29 16:26:16 crc kubenswrapper[4886]: E0129 16:26:16.610158 4886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.174:6443: connect: connection refused" interval="1.6s" Jan 29 16:26:18 crc kubenswrapper[4886]: E0129 16:26:18.211106 4886 controller.go:145] "Failed 
to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.174:6443: connect: connection refused" interval="3.2s" Jan 29 16:26:18 crc kubenswrapper[4886]: I0129 16:26:18.619528 4886 status_manager.go:851] "Failed to get status for pod" podUID="9027a6d8-0cac-4276-b722-08c3a99c6cf9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:18 crc kubenswrapper[4886]: I0129 16:26:18.620853 4886 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:21 crc kubenswrapper[4886]: E0129 16:26:21.412037 4886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.174:6443: connect: connection refused" interval="6.4s" Jan 29 16:26:22 crc kubenswrapper[4886]: E0129 16:26:22.440099 4886 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.174:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f4062ef2fd167 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 16:26:12.243755367 +0000 UTC m=+255.152474659,LastTimestamp:2026-01-29 16:26:12.243755367 +0000 UTC m=+255.152474659,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 16:26:25 crc kubenswrapper[4886]: I0129 16:26:25.614294 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:26:25 crc kubenswrapper[4886]: I0129 16:26:25.615628 4886 status_manager.go:851] "Failed to get status for pod" podUID="9027a6d8-0cac-4276-b722-08c3a99c6cf9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:25 crc kubenswrapper[4886]: I0129 16:26:25.616381 4886 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:25 crc kubenswrapper[4886]: I0129 16:26:25.634989 4886 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9630c976-1bbd-4f14-b4c7-fc0436ca3705" Jan 29 16:26:25 crc kubenswrapper[4886]: I0129 16:26:25.635062 4886 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9630c976-1bbd-4f14-b4c7-fc0436ca3705" Jan 29 16:26:25 crc kubenswrapper[4886]: E0129 16:26:25.635913 4886 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:26:25 crc kubenswrapper[4886]: I0129 16:26:25.636471 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:26:25 crc kubenswrapper[4886]: I0129 16:26:25.933471 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"576bc78e4b76f0ff28a3f03c5d234ce586e9d3fb6eb00dbb7c575ad0144179c4"} Jan 29 16:26:25 crc kubenswrapper[4886]: I0129 16:26:25.937047 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 16:26:25 crc kubenswrapper[4886]: I0129 16:26:25.937109 4886 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08" exitCode=1 Jan 29 16:26:25 crc kubenswrapper[4886]: I0129 16:26:25.937145 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08"} Jan 29 16:26:25 crc kubenswrapper[4886]: I0129 16:26:25.937710 4886 scope.go:117] "RemoveContainer" containerID="a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08" Jan 29 16:26:25 crc kubenswrapper[4886]: I0129 16:26:25.937974 4886 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.174:6443: 
connect: connection refused" Jan 29 16:26:25 crc kubenswrapper[4886]: I0129 16:26:25.938511 4886 status_manager.go:851] "Failed to get status for pod" podUID="9027a6d8-0cac-4276-b722-08c3a99c6cf9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:25 crc kubenswrapper[4886]: I0129 16:26:25.939017 4886 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:26 crc kubenswrapper[4886]: I0129 16:26:26.680530 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:26:26 crc kubenswrapper[4886]: I0129 16:26:26.947661 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 16:26:26 crc kubenswrapper[4886]: I0129 16:26:26.948130 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4c54ca3c104e6bbe0325be1c3777b09d70215a073d7aa15018d297a353e4dbc6"} Jan 29 16:26:26 crc kubenswrapper[4886]: I0129 16:26:26.949235 4886 status_manager.go:851] "Failed to get status for pod" podUID="9027a6d8-0cac-4276-b722-08c3a99c6cf9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:26 crc kubenswrapper[4886]: I0129 16:26:26.949991 4886 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:26 crc kubenswrapper[4886]: I0129 16:26:26.950534 4886 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:26 crc kubenswrapper[4886]: I0129 16:26:26.950698 4886 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="8dad781d4af802765bf506aa6cadb462999deeecf1dcbd5cb3f76ab9caeebeb9" exitCode=0 Jan 29 16:26:26 crc kubenswrapper[4886]: I0129 16:26:26.950754 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"8dad781d4af802765bf506aa6cadb462999deeecf1dcbd5cb3f76ab9caeebeb9"} Jan 29 16:26:26 crc kubenswrapper[4886]: I0129 16:26:26.951099 4886 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9630c976-1bbd-4f14-b4c7-fc0436ca3705" Jan 29 16:26:26 crc kubenswrapper[4886]: I0129 16:26:26.951135 4886 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9630c976-1bbd-4f14-b4c7-fc0436ca3705" Jan 29 16:26:26 crc kubenswrapper[4886]: E0129 16:26:26.951587 4886 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:26:26 crc kubenswrapper[4886]: I0129 16:26:26.951638 4886 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:26 crc kubenswrapper[4886]: I0129 16:26:26.952220 4886 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:26 crc kubenswrapper[4886]: I0129 16:26:26.952705 4886 status_manager.go:851] "Failed to get status for pod" podUID="9027a6d8-0cac-4276-b722-08c3a99c6cf9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.174:6443: connect: connection refused" Jan 29 16:26:27 crc kubenswrapper[4886]: I0129 16:26:27.958864 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4287cf42783b6750073994717fd7d568e50f9da2a07db5b726e2f78e4c469e77"} Jan 29 16:26:27 crc kubenswrapper[4886]: I0129 16:26:27.958900 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"96b66c9fbf5375b57b3c97dec37f824e12eac08fed2f97956b22ec7cc45c44f4"} Jan 29 16:26:27 crc kubenswrapper[4886]: I0129 16:26:27.958910 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9850f529a11e352c6246aa0a71bfec5294fdf9c2bc6c8a9fe2aa9af6f6a37ee7"} Jan 29 16:26:28 crc kubenswrapper[4886]: I0129 16:26:28.966757 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b33b62cf638211f8d3d3f038f7e733b3cfe70aa3fa225f193239f1d4b3b96041"} Jan 29 16:26:28 crc kubenswrapper[4886]: I0129 16:26:28.966810 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9b93df2811c25caa29409580b2d942f36f17760a1726f973735a28c47c2a43b8"} Jan 29 16:26:28 crc kubenswrapper[4886]: I0129 16:26:28.966962 4886 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:26:28 crc kubenswrapper[4886]: I0129 16:26:28.967040 4886 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9630c976-1bbd-4f14-b4c7-fc0436ca3705" Jan 29 16:26:28 crc kubenswrapper[4886]: I0129 16:26:28.967065 4886 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9630c976-1bbd-4f14-b4c7-fc0436ca3705" Jan 29 16:26:29 crc kubenswrapper[4886]: I0129 16:26:29.957656 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" podUID="b947565b-6a14-4bbd-881e-e82c33ca3a3b" containerName="oauth-openshift" containerID="cri-o://8bc0819e4d3779242ef0e41d51afff359c9061460b45623abee6c85c9020ca9a" gracePeriod=15 Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.297018 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.446888 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-serving-cert\") pod \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.446959 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-service-ca\") pod \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.447000 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b947565b-6a14-4bbd-881e-e82c33ca3a3b-audit-dir\") pod \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.447069 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-user-idp-0-file-data\") pod \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.447094 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-cliconfig\") pod \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.447130 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-ocp-branding-template\") pod \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.447170 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" 
(UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-session\") pod \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.447197 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-user-template-provider-selection\") pod \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.447234 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-router-certs\") pod \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.447274 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-user-template-error\") pod \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.447301 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-trusted-ca-bundle\") pod \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.447350 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b947565b-6a14-4bbd-881e-e82c33ca3a3b-audit-policies\") pod \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.447381 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-user-template-login\") pod \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.447419 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqjmr\" (UniqueName: \"kubernetes.io/projected/b947565b-6a14-4bbd-881e-e82c33ca3a3b-kube-api-access-hqjmr\") pod \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\" (UID: \"b947565b-6a14-4bbd-881e-e82c33ca3a3b\") " Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.447172 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b947565b-6a14-4bbd-881e-e82c33ca3a3b-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b947565b-6a14-4bbd-881e-e82c33ca3a3b" (UID: "b947565b-6a14-4bbd-881e-e82c33ca3a3b"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.448999 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "b947565b-6a14-4bbd-881e-e82c33ca3a3b" (UID: "b947565b-6a14-4bbd-881e-e82c33ca3a3b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.449047 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b947565b-6a14-4bbd-881e-e82c33ca3a3b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "b947565b-6a14-4bbd-881e-e82c33ca3a3b" (UID: "b947565b-6a14-4bbd-881e-e82c33ca3a3b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.449469 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "b947565b-6a14-4bbd-881e-e82c33ca3a3b" (UID: "b947565b-6a14-4bbd-881e-e82c33ca3a3b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.450551 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "b947565b-6a14-4bbd-881e-e82c33ca3a3b" (UID: "b947565b-6a14-4bbd-881e-e82c33ca3a3b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.455506 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "b947565b-6a14-4bbd-881e-e82c33ca3a3b" (UID: "b947565b-6a14-4bbd-881e-e82c33ca3a3b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.456015 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b947565b-6a14-4bbd-881e-e82c33ca3a3b-kube-api-access-hqjmr" (OuterVolumeSpecName: "kube-api-access-hqjmr") pod "b947565b-6a14-4bbd-881e-e82c33ca3a3b" (UID: "b947565b-6a14-4bbd-881e-e82c33ca3a3b"). InnerVolumeSpecName "kube-api-access-hqjmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.457403 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "b947565b-6a14-4bbd-881e-e82c33ca3a3b" (UID: "b947565b-6a14-4bbd-881e-e82c33ca3a3b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.457565 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "b947565b-6a14-4bbd-881e-e82c33ca3a3b" (UID: "b947565b-6a14-4bbd-881e-e82c33ca3a3b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.458383 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "b947565b-6a14-4bbd-881e-e82c33ca3a3b" (UID: "b947565b-6a14-4bbd-881e-e82c33ca3a3b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.458897 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "b947565b-6a14-4bbd-881e-e82c33ca3a3b" (UID: "b947565b-6a14-4bbd-881e-e82c33ca3a3b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.459190 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "b947565b-6a14-4bbd-881e-e82c33ca3a3b" (UID: "b947565b-6a14-4bbd-881e-e82c33ca3a3b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.459509 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "b947565b-6a14-4bbd-881e-e82c33ca3a3b" (UID: "b947565b-6a14-4bbd-881e-e82c33ca3a3b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.461224 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "b947565b-6a14-4bbd-881e-e82c33ca3a3b" (UID: "b947565b-6a14-4bbd-881e-e82c33ca3a3b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.548621 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.548688 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.548718 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.548737 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.548758 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.548784 4886 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b947565b-6a14-4bbd-881e-e82c33ca3a3b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.548806 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.548826 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqjmr\" (UniqueName: \"kubernetes.io/projected/b947565b-6a14-4bbd-881e-e82c33ca3a3b-kube-api-access-hqjmr\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.548844 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.548857 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.548873 4886 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b947565b-6a14-4bbd-881e-e82c33ca3a3b-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.548893 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-user-idp-0-file-data\") on 
node \"crc\" DevicePath \"\"" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.548911 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.548929 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b947565b-6a14-4bbd-881e-e82c33ca3a3b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.636921 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.637003 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.652097 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.994609 4886 generic.go:334] "Generic (PLEG): container finished" podID="b947565b-6a14-4bbd-881e-e82c33ca3a3b" containerID="8bc0819e4d3779242ef0e41d51afff359c9061460b45623abee6c85c9020ca9a" exitCode=0 Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.995249 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" event={"ID":"b947565b-6a14-4bbd-881e-e82c33ca3a3b","Type":"ContainerDied","Data":"8bc0819e4d3779242ef0e41d51afff359c9061460b45623abee6c85c9020ca9a"} Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.995623 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" event={"ID":"b947565b-6a14-4bbd-881e-e82c33ca3a3b","Type":"ContainerDied","Data":"cb33ac24972d3d5dba165317920577129d54d60d3420d9aec798c5982a6dac0a"} Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.995834 4886 scope.go:117] "RemoveContainer" containerID="8bc0819e4d3779242ef0e41d51afff359c9061460b45623abee6c85c9020ca9a" Jan 29 16:26:30 crc kubenswrapper[4886]: I0129 16:26:30.996322 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mpttg" Jan 29 16:26:31 crc kubenswrapper[4886]: I0129 16:26:31.023551 4886 scope.go:117] "RemoveContainer" containerID="8bc0819e4d3779242ef0e41d51afff359c9061460b45623abee6c85c9020ca9a" Jan 29 16:26:31 crc kubenswrapper[4886]: E0129 16:26:31.024072 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bc0819e4d3779242ef0e41d51afff359c9061460b45623abee6c85c9020ca9a\": container with ID starting with 8bc0819e4d3779242ef0e41d51afff359c9061460b45623abee6c85c9020ca9a not found: ID does not exist" containerID="8bc0819e4d3779242ef0e41d51afff359c9061460b45623abee6c85c9020ca9a" Jan 29 16:26:31 crc kubenswrapper[4886]: I0129 16:26:31.024167 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bc0819e4d3779242ef0e41d51afff359c9061460b45623abee6c85c9020ca9a"} err="failed to get container status \"8bc0819e4d3779242ef0e41d51afff359c9061460b45623abee6c85c9020ca9a\": rpc error: code = NotFound desc = could not find container \"8bc0819e4d3779242ef0e41d51afff359c9061460b45623abee6c85c9020ca9a\": container with ID starting with 8bc0819e4d3779242ef0e41d51afff359c9061460b45623abee6c85c9020ca9a not found: ID does not exist" Jan 29 16:26:33 crc kubenswrapper[4886]: I0129 16:26:33.077673 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:26:33 crc kubenswrapper[4886]: I0129 16:26:33.077779 4886 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 29 16:26:33 crc kubenswrapper[4886]: I0129 16:26:33.078644 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 29 16:26:33 crc kubenswrapper[4886]: I0129 16:26:33.978373 4886 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:26:34 crc kubenswrapper[4886]: I0129 16:26:34.015112 4886 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9630c976-1bbd-4f14-b4c7-fc0436ca3705" Jan 29 16:26:34 crc kubenswrapper[4886]: I0129 16:26:34.015148 4886 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9630c976-1bbd-4f14-b4c7-fc0436ca3705" Jan 29 16:26:34 crc kubenswrapper[4886]: I0129 16:26:34.018655 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:26:34 crc kubenswrapper[4886]: I0129 16:26:34.021156 4886 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="e5a72ce7-a4db-4f3d-ba76-57bd63d6dba2" Jan 29 16:26:35 crc kubenswrapper[4886]: I0129 16:26:35.020385 4886 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="9630c976-1bbd-4f14-b4c7-fc0436ca3705" Jan 29 16:26:35 crc kubenswrapper[4886]: I0129 16:26:35.020412 4886 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9630c976-1bbd-4f14-b4c7-fc0436ca3705" Jan 29 16:26:36 crc kubenswrapper[4886]: I0129 16:26:36.680493 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:26:38 crc kubenswrapper[4886]: I0129 16:26:38.643503 4886 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="e5a72ce7-a4db-4f3d-ba76-57bd63d6dba2" Jan 29 16:26:43 crc kubenswrapper[4886]: I0129 16:26:43.019819 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 29 16:26:43 crc kubenswrapper[4886]: I0129 16:26:43.078676 4886 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 29 16:26:43 crc kubenswrapper[4886]: I0129 16:26:43.078743 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 29 16:26:43 crc kubenswrapper[4886]: I0129 16:26:43.812541 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 29 16:26:44 crc kubenswrapper[4886]: I0129 16:26:44.098373 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 16:26:44 crc kubenswrapper[4886]: I0129 16:26:44.453658 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 29 16:26:44 crc kubenswrapper[4886]: I0129 16:26:44.468015 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 16:26:44 crc kubenswrapper[4886]: I0129 16:26:44.535866 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 29 16:26:44 crc kubenswrapper[4886]: I0129 16:26:44.890707 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 29 16:26:44 crc kubenswrapper[4886]: I0129 16:26:44.906868 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 29 16:26:45 crc kubenswrapper[4886]: I0129 16:26:45.005390 4886 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 29 16:26:45 crc kubenswrapper[4886]: I0129 16:26:45.097250 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 16:26:45 crc kubenswrapper[4886]: I0129 16:26:45.251465 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 
16:26:45 crc kubenswrapper[4886]: I0129 16:26:45.653704 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 29 16:26:45 crc kubenswrapper[4886]: I0129 16:26:45.700260 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 16:26:45 crc kubenswrapper[4886]: I0129 16:26:45.765113 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 16:26:45 crc kubenswrapper[4886]: I0129 16:26:45.861661 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 16:26:46 crc kubenswrapper[4886]: I0129 16:26:46.338378 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 16:26:46 crc kubenswrapper[4886]: I0129 16:26:46.480753 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 16:26:46 crc kubenswrapper[4886]: I0129 16:26:46.647143 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 29 16:26:46 crc kubenswrapper[4886]: I0129 16:26:46.666892 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 29 16:26:46 crc kubenswrapper[4886]: I0129 16:26:46.705681 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 16:26:46 crc kubenswrapper[4886]: I0129 16:26:46.764489 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 16:26:46 crc kubenswrapper[4886]: I0129 16:26:46.831305 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 29 16:26:47 crc kubenswrapper[4886]: I0129 16:26:47.165321 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 29 16:26:47 crc kubenswrapper[4886]: I0129 16:26:47.198691 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 16:26:47 crc kubenswrapper[4886]: I0129 16:26:47.259292 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 29 16:26:47 crc kubenswrapper[4886]: I0129 16:26:47.266289 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 16:26:47 crc kubenswrapper[4886]: I0129 16:26:47.365322 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 29 16:26:47 crc kubenswrapper[4886]: I0129 16:26:47.462277 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 16:26:47 crc kubenswrapper[4886]: I0129 16:26:47.675205 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 29 16:26:47 crc kubenswrapper[4886]: I0129 16:26:47.704025 4886 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 29 16:26:47 crc 
kubenswrapper[4886]: I0129 16:26:47.902445 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 29 16:26:47 crc kubenswrapper[4886]: I0129 16:26:47.960393 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 29 16:26:47 crc kubenswrapper[4886]: I0129 16:26:47.968875 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 29 16:26:47 crc kubenswrapper[4886]: I0129 16:26:47.969947 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 29 16:26:48 crc kubenswrapper[4886]: I0129 16:26:48.065863 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 16:26:48 crc kubenswrapper[4886]: I0129 16:26:48.106964 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 29 16:26:48 crc kubenswrapper[4886]: I0129 16:26:48.120250 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 29 16:26:48 crc kubenswrapper[4886]: I0129 16:26:48.241238 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 29 16:26:48 crc kubenswrapper[4886]: I0129 16:26:48.289573 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 16:26:48 crc kubenswrapper[4886]: I0129 16:26:48.302057 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 16:26:48 crc kubenswrapper[4886]: I0129 16:26:48.493352 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 29 16:26:48 crc kubenswrapper[4886]: I0129 16:26:48.573495 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 29 16:26:48 crc kubenswrapper[4886]: I0129 16:26:48.633461 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 29 16:26:48 crc kubenswrapper[4886]: I0129 16:26:48.657793 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 16:26:48 crc kubenswrapper[4886]: I0129 16:26:48.786187 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 29 16:26:48 crc kubenswrapper[4886]: I0129 16:26:48.876399 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 29 16:26:48 crc kubenswrapper[4886]: I0129 16:26:48.876597 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 29 16:26:48 crc kubenswrapper[4886]: I0129 16:26:48.927597 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 29 16:26:48 crc kubenswrapper[4886]: I0129 16:26:48.965518 4886 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 16:26:48 crc kubenswrapper[4886]: I0129 16:26:48.970822 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 29 16:26:49 crc kubenswrapper[4886]: I0129 16:26:49.103272 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 29 16:26:49 crc kubenswrapper[4886]: I0129 16:26:49.121828 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 29 16:26:49 crc kubenswrapper[4886]: I0129 16:26:49.128556 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 29 16:26:49 crc kubenswrapper[4886]: I0129 16:26:49.148975 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 29 16:26:49 crc kubenswrapper[4886]: I0129 16:26:49.170996 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 16:26:49 crc kubenswrapper[4886]: I0129 16:26:49.174456 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 29 16:26:49 crc kubenswrapper[4886]: I0129 16:26:49.182532 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 16:26:49 crc kubenswrapper[4886]: I0129 16:26:49.184423 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 29 16:26:49 crc kubenswrapper[4886]: I0129 16:26:49.377591 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 16:26:49 crc kubenswrapper[4886]: I0129 16:26:49.405710 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 29 16:26:49 crc kubenswrapper[4886]: I0129 16:26:49.437028 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 29 16:26:49 crc kubenswrapper[4886]: I0129 16:26:49.479591 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 29 16:26:49 crc kubenswrapper[4886]: I0129 16:26:49.498731 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 29 16:26:49 crc kubenswrapper[4886]: I0129 16:26:49.548994 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 16:26:49 crc kubenswrapper[4886]: I0129 16:26:49.623445 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 29 16:26:49 crc kubenswrapper[4886]: I0129 16:26:49.653796 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 16:26:49 crc kubenswrapper[4886]: I0129 16:26:49.677887 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 16:26:49 crc kubenswrapper[4886]: I0129 
16:26:49.685870 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 16:26:49 crc kubenswrapper[4886]: I0129 16:26:49.689886 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 16:26:49 crc kubenswrapper[4886]: I0129 16:26:49.744994 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 29 16:26:49 crc kubenswrapper[4886]: I0129 16:26:49.887305 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 29 16:26:49 crc kubenswrapper[4886]: I0129 16:26:49.914322 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 29 16:26:50 crc kubenswrapper[4886]: I0129 16:26:50.057850 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 16:26:50 crc kubenswrapper[4886]: I0129 16:26:50.127241 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 29 16:26:50 crc kubenswrapper[4886]: I0129 16:26:50.128997 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 29 16:26:50 crc kubenswrapper[4886]: I0129 16:26:50.140300 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 29 16:26:50 crc kubenswrapper[4886]: I0129 16:26:50.152435 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 16:26:50 crc kubenswrapper[4886]: I0129 16:26:50.167360 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 29 16:26:50 crc kubenswrapper[4886]: I0129 16:26:50.186284 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 29 16:26:50 crc kubenswrapper[4886]: I0129 16:26:50.248785 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 16:26:50 crc kubenswrapper[4886]: I0129 16:26:50.303431 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 29 16:26:50 crc kubenswrapper[4886]: I0129 16:26:50.385617 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 29 16:26:50 crc kubenswrapper[4886]: I0129 16:26:50.388015 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 29 16:26:50 crc kubenswrapper[4886]: I0129 16:26:50.502524 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 29 16:26:50 crc kubenswrapper[4886]: I0129 16:26:50.669402 4886 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 29 16:26:50 crc kubenswrapper[4886]: I0129 16:26:50.691942 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 16:26:50 crc kubenswrapper[4886]: I0129 16:26:50.725852 
4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 29 16:26:50 crc kubenswrapper[4886]: I0129 16:26:50.840632 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 29 16:26:50 crc kubenswrapper[4886]: I0129 16:26:50.841722 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 29 16:26:50 crc kubenswrapper[4886]: I0129 16:26:50.876713 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 29 16:26:50 crc kubenswrapper[4886]: I0129 16:26:50.904678 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 29 16:26:50 crc kubenswrapper[4886]: I0129 16:26:50.927372 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 16:26:50 crc kubenswrapper[4886]: I0129 16:26:50.959497 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 29 16:26:51 crc kubenswrapper[4886]: I0129 16:26:51.002818 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 29 16:26:51 crc kubenswrapper[4886]: I0129 16:26:51.151461 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 29 16:26:51 crc kubenswrapper[4886]: I0129 16:26:51.154792 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 16:26:51 crc kubenswrapper[4886]: I0129 16:26:51.199265 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 29 16:26:51 crc kubenswrapper[4886]: I0129 16:26:51.297950 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 29 16:26:51 crc kubenswrapper[4886]: I0129 16:26:51.332622 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 29 16:26:51 crc kubenswrapper[4886]: I0129 16:26:51.362147 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 29 16:26:51 crc kubenswrapper[4886]: I0129 16:26:51.413371 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 16:26:51 crc kubenswrapper[4886]: I0129 16:26:51.415962 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 29 16:26:51 crc kubenswrapper[4886]: I0129 16:26:51.420859 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 29 16:26:51 crc kubenswrapper[4886]: I0129 16:26:51.427049 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 16:26:51 crc kubenswrapper[4886]: I0129 16:26:51.540839 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 29 16:26:51 crc kubenswrapper[4886]: I0129 16:26:51.580818 4886 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 29 16:26:51 crc kubenswrapper[4886]: I0129 16:26:51.614978 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 29 16:26:51 crc kubenswrapper[4886]: I0129 16:26:51.702399 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 16:26:51 crc kubenswrapper[4886]: I0129 16:26:51.709604 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 16:26:51 crc kubenswrapper[4886]: I0129 16:26:51.864093 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 29 16:26:51 crc kubenswrapper[4886]: I0129 16:26:51.869209 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 16:26:51 crc kubenswrapper[4886]: I0129 16:26:51.972416 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 16:26:52 crc kubenswrapper[4886]: I0129 16:26:52.041740 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 29 16:26:52 crc kubenswrapper[4886]: I0129 16:26:52.071159 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 29 16:26:52 crc kubenswrapper[4886]: I0129 16:26:52.106879 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 29 16:26:52 crc kubenswrapper[4886]: I0129 16:26:52.169627 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 16:26:52 crc kubenswrapper[4886]: I0129 16:26:52.179611 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 29 16:26:52 crc kubenswrapper[4886]: I0129 16:26:52.188782 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 29 16:26:52 crc kubenswrapper[4886]: I0129 16:26:52.192150 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 16:26:52 crc kubenswrapper[4886]: I0129 16:26:52.195294 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 29 16:26:52 crc kubenswrapper[4886]: I0129 16:26:52.285505 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 16:26:52 crc kubenswrapper[4886]: I0129 16:26:52.328841 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 16:26:52 crc kubenswrapper[4886]: I0129 16:26:52.415663 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 29 16:26:52 crc kubenswrapper[4886]: I0129 16:26:52.419464 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 16:26:52 crc kubenswrapper[4886]: I0129 
16:26:52.553021 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 29 16:26:52 crc kubenswrapper[4886]: I0129 16:26:52.577980 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 16:26:52 crc kubenswrapper[4886]: I0129 16:26:52.769712 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 16:26:52 crc kubenswrapper[4886]: I0129 16:26:52.856230 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 16:26:53 crc kubenswrapper[4886]: I0129 16:26:53.014917 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 29 16:26:53 crc kubenswrapper[4886]: I0129 16:26:53.035373 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 16:26:53 crc kubenswrapper[4886]: I0129 16:26:53.059558 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 29 16:26:53 crc kubenswrapper[4886]: I0129 16:26:53.077954 4886 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 29 16:26:53 crc kubenswrapper[4886]: I0129 16:26:53.078050 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 29 16:26:53 crc kubenswrapper[4886]: I0129 16:26:53.078122 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:26:53 crc kubenswrapper[4886]: I0129 16:26:53.079128 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"4c54ca3c104e6bbe0325be1c3777b09d70215a073d7aa15018d297a353e4dbc6"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 29 16:26:53 crc kubenswrapper[4886]: I0129 16:26:53.079442 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://4c54ca3c104e6bbe0325be1c3777b09d70215a073d7aa15018d297a353e4dbc6" gracePeriod=30 Jan 29 16:26:53 crc kubenswrapper[4886]: I0129 16:26:53.119950 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 16:26:53 crc kubenswrapper[4886]: I0129 16:26:53.162968 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 16:26:53 crc kubenswrapper[4886]: I0129 16:26:53.196429 4886 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 29 16:26:53 crc kubenswrapper[4886]: I0129 16:26:53.390459 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 16:26:53 crc kubenswrapper[4886]: I0129 16:26:53.439093 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 29 16:26:53 crc kubenswrapper[4886]: I0129 16:26:53.462554 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 29 16:26:53 crc kubenswrapper[4886]: I0129 16:26:53.466750 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 29 16:26:53 crc kubenswrapper[4886]: I0129 16:26:53.620250 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 29 16:26:53 crc kubenswrapper[4886]: I0129 16:26:53.630669 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 29 16:26:53 crc kubenswrapper[4886]: I0129 16:26:53.749033 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 29 16:26:53 crc kubenswrapper[4886]: I0129 16:26:53.807965 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 16:26:53 crc kubenswrapper[4886]: I0129 16:26:53.859520 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 29 16:26:53 crc kubenswrapper[4886]: I0129 16:26:53.868616 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 29 16:26:54 crc kubenswrapper[4886]: I0129 16:26:53.999923 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 29 16:26:54 crc kubenswrapper[4886]: I0129 16:26:54.016240 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 29 16:26:54 crc kubenswrapper[4886]: I0129 16:26:54.100495 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 16:26:54 crc kubenswrapper[4886]: I0129 16:26:54.129086 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 16:26:54 crc kubenswrapper[4886]: I0129 16:26:54.133668 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 29 16:26:54 crc kubenswrapper[4886]: I0129 16:26:54.230798 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 29 16:26:54 crc kubenswrapper[4886]: I0129 16:26:54.237597 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 16:26:54 crc kubenswrapper[4886]: I0129 16:26:54.282202 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 16:26:54 crc kubenswrapper[4886]: I0129 16:26:54.293168 4886 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"authentication-operator-config" Jan 29 16:26:54 crc kubenswrapper[4886]: I0129 16:26:54.317482 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 29 16:26:54 crc kubenswrapper[4886]: I0129 16:26:54.357408 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 29 16:26:54 crc kubenswrapper[4886]: I0129 16:26:54.380988 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 16:26:54 crc kubenswrapper[4886]: I0129 16:26:54.391991 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 29 16:26:54 crc kubenswrapper[4886]: I0129 16:26:54.438317 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 16:26:54 crc kubenswrapper[4886]: I0129 16:26:54.499833 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 29 16:26:54 crc kubenswrapper[4886]: I0129 16:26:54.512225 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 29 16:26:54 crc kubenswrapper[4886]: I0129 16:26:54.526350 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 29 16:26:54 crc kubenswrapper[4886]: I0129 16:26:54.533979 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 29 16:26:54 crc kubenswrapper[4886]: I0129 16:26:54.673659 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 29 16:26:54 crc kubenswrapper[4886]: I0129 16:26:54.706353 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 29 16:26:54 crc kubenswrapper[4886]: I0129 16:26:54.706381 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 29 16:26:54 crc kubenswrapper[4886]: I0129 16:26:54.713942 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 16:26:54 crc kubenswrapper[4886]: I0129 16:26:54.724810 4886 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 29 16:26:54 crc kubenswrapper[4886]: I0129 16:26:54.863169 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 16:26:54 crc kubenswrapper[4886]: I0129 16:26:54.966908 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 29 16:26:55 crc kubenswrapper[4886]: I0129 16:26:55.003169 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 29 16:26:55 crc kubenswrapper[4886]: I0129 16:26:55.024163 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 16:26:55 crc kubenswrapper[4886]: I0129 
16:26:55.042427 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 29 16:26:55 crc kubenswrapper[4886]: I0129 16:26:55.047878 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 29 16:26:55 crc kubenswrapper[4886]: I0129 16:26:55.131066 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 29 16:26:55 crc kubenswrapper[4886]: I0129 16:26:55.160883 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 29 16:26:55 crc kubenswrapper[4886]: I0129 16:26:55.230677 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 16:26:55 crc kubenswrapper[4886]: I0129 16:26:55.247755 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 29 16:26:55 crc kubenswrapper[4886]: I0129 16:26:55.288644 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 29 16:26:55 crc kubenswrapper[4886]: I0129 16:26:55.339230 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 29 16:26:55 crc kubenswrapper[4886]: I0129 16:26:55.399580 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 16:26:55 crc kubenswrapper[4886]: I0129 16:26:55.461151 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 29 16:26:55 crc kubenswrapper[4886]: I0129 16:26:55.543917 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 29 16:26:55 crc kubenswrapper[4886]: I0129 16:26:55.611590 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 29 16:26:55 crc kubenswrapper[4886]: I0129 16:26:55.733799 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 29 16:26:55 crc kubenswrapper[4886]: I0129 16:26:55.747720 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 29 16:26:55 crc kubenswrapper[4886]: I0129 16:26:55.771708 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 16:26:55 crc kubenswrapper[4886]: I0129 16:26:55.844163 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 16:26:55 crc kubenswrapper[4886]: I0129 16:26:55.882224 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 29 16:26:55 crc kubenswrapper[4886]: I0129 16:26:55.923935 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 16:26:55 crc kubenswrapper[4886]: I0129 16:26:55.950053 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 29 16:26:56 crc kubenswrapper[4886]: I0129 16:26:56.099265 4886 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 16:26:56 crc kubenswrapper[4886]: I0129 16:26:56.136880 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 29 16:26:56 crc kubenswrapper[4886]: I0129 16:26:56.215174 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 29 16:26:56 crc kubenswrapper[4886]: I0129 16:26:56.264097 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 29 16:26:56 crc kubenswrapper[4886]: I0129 16:26:56.404140 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 29 16:26:56 crc kubenswrapper[4886]: I0129 16:26:56.463051 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 29 16:26:56 crc kubenswrapper[4886]: I0129 16:26:56.545780 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 29 16:26:56 crc kubenswrapper[4886]: I0129 16:26:56.558807 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 16:26:56 crc kubenswrapper[4886]: I0129 16:26:56.699022 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 16:26:56 crc kubenswrapper[4886]: I0129 16:26:56.703230 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 29 16:26:56 crc kubenswrapper[4886]: I0129 16:26:56.754311 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 29 16:26:56 crc kubenswrapper[4886]: I0129 16:26:56.883074 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 29 16:26:56 crc kubenswrapper[4886]: I0129 16:26:56.953886 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 29 16:26:56 crc kubenswrapper[4886]: I0129 16:26:56.983212 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 16:26:56 crc kubenswrapper[4886]: I0129 16:26:56.999616 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 16:26:57 crc kubenswrapper[4886]: I0129 16:26:57.053145 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 29 16:26:57 crc kubenswrapper[4886]: I0129 16:26:57.433875 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 29 16:26:57 crc kubenswrapper[4886]: I0129 16:26:57.726017 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 29 16:26:57 crc kubenswrapper[4886]: I0129 16:26:57.820563 4886 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 29 16:26:57 crc kubenswrapper[4886]: I0129 16:26:57.827158 4886 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 29 16:26:57 crc kubenswrapper[4886]: I0129 16:26:57.851366 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 29 16:26:57 crc kubenswrapper[4886]: I0129 16:26:57.952659 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.022135 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.113772 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.127453 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.208705 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.315779 4886 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.374041 4886 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.381745 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=47.381709368 podStartE2EDuration="47.381709368s" podCreationTimestamp="2026-01-29 16:26:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:26:33.653702989 +0000 UTC m=+276.562422261" watchObservedRunningTime="2026-01-29 16:26:58.381709368 +0000 UTC m=+301.290428690" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.386370 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-mpttg"] Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.386449 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg","openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 16:26:58 crc kubenswrapper[4886]: E0129 16:26:58.386725 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b947565b-6a14-4bbd-881e-e82c33ca3a3b" containerName="oauth-openshift" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.386977 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="b947565b-6a14-4bbd-881e-e82c33ca3a3b" containerName="oauth-openshift" Jan 29 16:26:58 crc kubenswrapper[4886]: E0129 16:26:58.387040 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9027a6d8-0cac-4276-b722-08c3a99c6cf9" containerName="installer" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.387062 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9027a6d8-0cac-4276-b722-08c3a99c6cf9" containerName="installer" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.387413 4886 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9630c976-1bbd-4f14-b4c7-fc0436ca3705" Jan 29 16:26:58 crc 
kubenswrapper[4886]: I0129 16:26:58.387449 4886 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9630c976-1bbd-4f14-b4c7-fc0436ca3705" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.388740 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="b947565b-6a14-4bbd-881e-e82c33ca3a3b" containerName="oauth-openshift" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.388796 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="9027a6d8-0cac-4276-b722-08c3a99c6cf9" containerName="installer" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.389857 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.395588 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.395825 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.395760 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.396580 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.396834 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.397909 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.398517 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.398745 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.399027 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.399578 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.399595 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.404271 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.404620 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.422091 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.422160 4886 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.429590 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.453782 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=25.453761451 podStartE2EDuration="25.453761451s" podCreationTimestamp="2026-01-29 16:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:26:58.442808599 +0000 UTC m=+301.351527971" watchObservedRunningTime="2026-01-29 16:26:58.453761451 +0000 UTC m=+301.362480733" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.490216 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.494517 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.498142 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.515157 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-user-template-error\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.515201 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.515224 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-user-template-login\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.515246 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92af746d-c60d-46a4-9be0-0ad28882ac0e-audit-dir\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.515261 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-869nb\" (UniqueName: \"kubernetes.io/projected/92af746d-c60d-46a4-9be0-0ad28882ac0e-kube-api-access-869nb\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: 
\"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.515282 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.515305 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.515515 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-router-certs\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.515591 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-service-ca\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.515641 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.515670 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.515758 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.515809 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-session\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.515841 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92af746d-c60d-46a4-9be0-0ad28882ac0e-audit-policies\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.595081 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.618190 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-session\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.618489 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.618519 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92af746d-c60d-46a4-9be0-0ad28882ac0e-audit-policies\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.618547 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-user-template-error\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.618563 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.618586 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-user-template-login\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: 
I0129 16:26:58.618607 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92af746d-c60d-46a4-9be0-0ad28882ac0e-audit-dir\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.618622 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-869nb\" (UniqueName: \"kubernetes.io/projected/92af746d-c60d-46a4-9be0-0ad28882ac0e-kube-api-access-869nb\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.618639 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.618662 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.618684 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-router-certs\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.618718 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-service-ca\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.618736 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.618758 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.618969 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92af746d-c60d-46a4-9be0-0ad28882ac0e-audit-dir\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.619845 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.622063 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.622918 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.622915 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.623384 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.623447 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.623575 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.623695 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.625394 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.626977 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.627240 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b947565b-6a14-4bbd-881e-e82c33ca3a3b" path="/var/lib/kubelet/pods/b947565b-6a14-4bbd-881e-e82c33ca3a3b/volumes" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.630373 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-service-ca\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.630606 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.631295 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/92af746d-c60d-46a4-9be0-0ad28882ac0e-audit-policies\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.635258 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.635878 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-session\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.636041 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-router-certs\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.636980 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.638944 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-user-template-error\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.639254 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-user-template-login\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.639827 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.639956 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.639968 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg"
Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.643880 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.650589 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.651993 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.653259 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg"
Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.662003 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.677400 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-869nb\" (UniqueName: \"kubernetes.io/projected/92af746d-c60d-46a4-9be0-0ad28882ac0e-kube-api-access-869nb\") pod \"oauth-openshift-9fbfc7dc4-r9gqg\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg"
Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.735949 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.744821 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg"
Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.747593 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.784539 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 29 16:26:58 crc kubenswrapper[4886]: I0129 16:26:58.814193 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 29 16:26:59 crc kubenswrapper[4886]: I0129 16:26:59.100530 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 29 16:26:59 crc kubenswrapper[4886]: I0129 16:26:59.177797 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg"]
Jan 29 16:26:59 crc kubenswrapper[4886]: W0129 16:26:59.183607 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92af746d_c60d_46a4_9be0_0ad28882ac0e.slice/crio-14141aff9fbd287a70454765b395ba76ef2991c8de80ea1c92111cb0e0c784c3 WatchSource:0}: Error finding container 14141aff9fbd287a70454765b395ba76ef2991c8de80ea1c92111cb0e0c784c3: Status 404 returned error can't find the container with id 14141aff9fbd287a70454765b395ba76ef2991c8de80ea1c92111cb0e0c784c3
Jan 29 16:26:59 crc kubenswrapper[4886]: I0129 16:26:59.193124 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 29 16:26:59 crc kubenswrapper[4886]: I0129 16:26:59.253180 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 29 16:26:59 crc kubenswrapper[4886]: I0129 16:26:59.648173 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 29 16:26:59 crc kubenswrapper[4886]: I0129 16:26:59.673127 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 29 16:26:59 crc kubenswrapper[4886]: I0129 16:26:59.754744 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 29 16:26:59 crc kubenswrapper[4886]: I0129 16:26:59.848368 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 29 16:26:59 crc kubenswrapper[4886]: I0129 16:26:59.859408 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 29 16:27:00 crc kubenswrapper[4886]: I0129 16:27:00.175053 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" event={"ID":"92af746d-c60d-46a4-9be0-0ad28882ac0e","Type":"ContainerStarted","Data":"47b4200b809c1086f4ae9fa69412cd5a201589369e8ff103458bcc2e4a47f38e"}
Jan 29 16:27:00 crc kubenswrapper[4886]: I0129 16:27:00.175131 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" event={"ID":"92af746d-c60d-46a4-9be0-0ad28882ac0e","Type":"ContainerStarted","Data":"14141aff9fbd287a70454765b395ba76ef2991c8de80ea1c92111cb0e0c784c3"}
Jan 29 16:27:00 crc kubenswrapper[4886]: I0129 16:27:00.199470 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" podStartSLOduration=56.19944665 podStartE2EDuration="56.19944665s" podCreationTimestamp="2026-01-29 16:26:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:27:00.198964385 +0000 UTC m=+303.107683657" watchObservedRunningTime="2026-01-29 16:27:00.19944665 +0000 UTC m=+303.108165962"
Jan 29 16:27:00 crc kubenswrapper[4886]: I0129 16:27:00.278452 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 29 16:27:01 crc kubenswrapper[4886]: I0129 16:27:01.182132 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg"
Jan 29 16:27:01 crc kubenswrapper[4886]: I0129 16:27:01.191808 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg"
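The records above trace one complete pod start: secret and configmap volumes mount, util.go reports that no sandbox exists yet, PLEG emits ContainerStarted for the sandbox and the container, and the readiness probe flips from "" to "ready"; pod_startup_latency_tracker then summarizes the whole span as podStartSLOduration. A minimal sketch for pulling those durations out of a journal stream follows; the regex is inferred from the record format above, not a kubelet-provided interface:

    import re
    import sys

    # Matches the pod_startup_latency_tracker record seen above, e.g.
    #   "Observed pod startup duration" pod="ns/name" podStartSLOduration=56.19944665 ...
    PATTERN = re.compile(
        r'"Observed pod startup duration" pod="(?P<pod>[^"]+)"'
        r' podStartSLOduration=(?P<slo>[0-9.]+)'
    )

    def startup_durations(lines):
        """Yield (pod, seconds) for every startup-latency record in the stream."""
        for line in lines:
            m = PATTERN.search(line)
            if m:
                yield m.group("pod"), float(m.group("slo"))

    if __name__ == "__main__":
        # e.g. journalctl -u kubelet --no-pager | python3 startup_durations.py
        for pod, seconds in startup_durations(sys.stdin):
            print(f"{seconds:10.3f}s  {pod}")

Fed this section, it would report roughly 56.199s for openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg.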
Jan 29 16:27:07 crc kubenswrapper[4886]: I0129 16:27:07.739129 4886 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 29 16:27:07 crc kubenswrapper[4886]: I0129 16:27:07.739843 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://e338e481af24aecd5ce5485aecf3d5729c1fbb23b68efbbc211fd833fc6aa1fa" gracePeriod=5
Jan 29 16:27:09 crc kubenswrapper[4886]: I0129 16:27:09.096961 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 29 16:27:13 crc kubenswrapper[4886]: I0129 16:27:13.002600 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 29 16:27:13 crc kubenswrapper[4886]: I0129 16:27:13.277557 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 29 16:27:13 crc kubenswrapper[4886]: I0129 16:27:13.277611 4886 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="e338e481af24aecd5ce5485aecf3d5729c1fbb23b68efbbc211fd833fc6aa1fa" exitCode=137
Jan 29 16:27:13 crc kubenswrapper[4886]: I0129 16:27:13.325103 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 29 16:27:13 crc kubenswrapper[4886]: I0129 16:27:13.325206 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 29 16:27:13 crc kubenswrapper[4886]: I0129 16:27:13.448016 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 29 16:27:13 crc kubenswrapper[4886]: I0129 16:27:13.448114 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 29 16:27:13 crc kubenswrapper[4886]: I0129 16:27:13.448137 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 29 16:27:13 crc kubenswrapper[4886]: I0129 16:27:13.448167 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 29 16:27:13 crc kubenswrapper[4886]: I0129 16:27:13.448207 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 29 16:27:13 crc kubenswrapper[4886]: I0129 16:27:13.448225 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:27:13 crc kubenswrapper[4886]: I0129 16:27:13.448399 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:27:13 crc kubenswrapper[4886]: I0129 16:27:13.448452 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:27:13 crc kubenswrapper[4886]: I0129 16:27:13.448429 4886 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 29 16:27:13 crc kubenswrapper[4886]: I0129 16:27:13.448265 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:27:13 crc kubenswrapper[4886]: I0129 16:27:13.459060 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:27:13 crc kubenswrapper[4886]: I0129 16:27:13.549690 4886 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Jan 29 16:27:13 crc kubenswrapper[4886]: I0129 16:27:13.549726 4886 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 29 16:27:13 crc kubenswrapper[4886]: I0129 16:27:13.549739 4886 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 29 16:27:13 crc kubenswrapper[4886]: I0129 16:27:13.549754 4886 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 29 16:27:14 crc kubenswrapper[4886]: I0129 16:27:14.284950 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 29 16:27:14 crc kubenswrapper[4886]: I0129 16:27:14.285269 4886 scope.go:117] "RemoveContainer" containerID="e338e481af24aecd5ce5485aecf3d5729c1fbb23b68efbbc211fd833fc6aa1fa"
Jan 29 16:27:14 crc kubenswrapper[4886]: I0129 16:27:14.285398 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 29 16:27:14 crc kubenswrapper[4886]: I0129 16:27:14.623651 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 29 16:27:14 crc kubenswrapper[4886]: I0129 16:27:14.624038 4886 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Jan 29 16:27:14 crc kubenswrapper[4886]: I0129 16:27:14.642290 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 29 16:27:14 crc kubenswrapper[4886]: I0129 16:27:14.642386 4886 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="d3430f3b-6a12-4358-ba18-177e3d6eeb69"
Jan 29 16:27:14 crc kubenswrapper[4886]: I0129 16:27:14.649408 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 29 16:27:14 crc kubenswrapper[4886]: I0129 16:27:14.649492 4886 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="d3430f3b-6a12-4358-ba18-177e3d6eeb69"
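This block is a static-pod teardown: the REMOVE arrives from source="file" (the manifest was removed), the kubelet kills the container with gracePeriod=5, and the container exits with code 137, i.e. 128 + SIGKILL(9), meaning it did not stop within the grace period; the host-path volumes are then unmounted and the API-side mirror pod is deleted. A small helper for decoding such exit codes, using the standard 128+N shell convention (illustrative only, not kubelet code):

    import signal

    def describe_exit_code(code: int) -> str:
        """Decode the 128+N convention for signal deaths (e.g. 137 -> SIGKILL)."""
        if code > 128:
            try:
                return f"killed by {signal.Signals(code - 128).name}"
            except ValueError:
                return f"killed by signal {code - 128}"
        return f"exited with status {code}"

    assert describe_exit_code(137) == "killed by SIGKILL"
    assert describe_exit_code(0) == "exited with status 0"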
Jan 29 16:27:23 crc kubenswrapper[4886]: I0129 16:27:23.354035 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Jan 29 16:27:23 crc kubenswrapper[4886]: I0129 16:27:23.358234 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 29 16:27:23 crc kubenswrapper[4886]: I0129 16:27:23.358312 4886 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="4c54ca3c104e6bbe0325be1c3777b09d70215a073d7aa15018d297a353e4dbc6" exitCode=137
Jan 29 16:27:23 crc kubenswrapper[4886]: I0129 16:27:23.358391 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"4c54ca3c104e6bbe0325be1c3777b09d70215a073d7aa15018d297a353e4dbc6"}
Jan 29 16:27:23 crc kubenswrapper[4886]: I0129 16:27:23.358444 4886 scope.go:117] "RemoveContainer" containerID="a370948657cae25c181170bc42e45d896e01469cb4079ad6ed412210527edb08"
Jan 29 16:27:24 crc kubenswrapper[4886]: I0129 16:27:24.367036 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Jan 29 16:27:24 crc kubenswrapper[4886]: I0129 16:27:24.369179 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"28ed2d1f0f1eb97b92ecd5ed5ed65125b784ec21e7527d142ec869a0c7b7cfa0"}
Jan 29 16:27:26 crc kubenswrapper[4886]: I0129 16:27:26.680721 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:27:33 crc kubenswrapper[4886]: I0129 16:27:33.078159 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:27:33 crc kubenswrapper[4886]: I0129 16:27:33.086550 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:27:33 crc kubenswrapper[4886]: I0129 16:27:33.433993 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
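The restarted kube-controller-manager walks the usual probe ladder: startup flips unhealthy -> started, then readiness flips "" -> ready. A sketch for tracking the latest probe state per pod from these SyncLoop (probe) records; the regex is inferred from the log format above:

    import re
    import sys

    # Matches e.g. "SyncLoop (probe)" probe="readiness" status="ready" pod="ns/name"
    PROBE = re.compile(
        r'"SyncLoop \(probe\)" probe="(?P<probe>\w+)"'
        r' status="(?P<status>[^"]*)" pod="(?P<pod>[^"]+)"'
    )

    def probe_states(lines):
        """Return the most recent status observed for each (pod, probe) pair."""
        state = {}
        for line in lines:
            m = PROBE.search(line)
            if m:
                state[(m.group("pod"), m.group("probe"))] = m.group("status") or "(empty)"
        return state

    if __name__ == "__main__":
        for (pod, probe), status in sorted(probe_states(sys.stdin).items()):
            print(f"{pod}: {probe}={status}")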
Jan 29 16:27:39 crc kubenswrapper[4886]: I0129 16:27:39.479416 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 29 16:27:42 crc kubenswrapper[4886]: I0129 16:27:42.884925 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4rg2h"]
Jan 29 16:27:42 crc kubenswrapper[4886]: I0129 16:27:42.885566 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-4rg2h" podUID="4d5118e4-db44-4e09-a04d-2036e251936b" containerName="controller-manager" containerID="cri-o://074bdcd69e5d52baa3572c419d1d23725c2153e656e43405d65063d3d379a2ec" gracePeriod=30
Jan 29 16:27:42 crc kubenswrapper[4886]: I0129 16:27:42.890759 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9"]
Jan 29 16:27:42 crc kubenswrapper[4886]: I0129 16:27:42.891085 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9" podUID="eb068b0a-4b6b-48b7-bae4-ab193394f299" containerName="route-controller-manager" containerID="cri-o://bf056c7b64d1db40a273e61237f21df213f55de77057daa8d3f79b233f6b1bca" gracePeriod=30
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.311870 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9"
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.320440 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4rg2h"
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.455388 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d5118e4-db44-4e09-a04d-2036e251936b-config\") pod \"4d5118e4-db44-4e09-a04d-2036e251936b\" (UID: \"4d5118e4-db44-4e09-a04d-2036e251936b\") "
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.455433 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d5118e4-db44-4e09-a04d-2036e251936b-proxy-ca-bundles\") pod \"4d5118e4-db44-4e09-a04d-2036e251936b\" (UID: \"4d5118e4-db44-4e09-a04d-2036e251936b\") "
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.455464 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb068b0a-4b6b-48b7-bae4-ab193394f299-config\") pod \"eb068b0a-4b6b-48b7-bae4-ab193394f299\" (UID: \"eb068b0a-4b6b-48b7-bae4-ab193394f299\") "
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.455500 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb068b0a-4b6b-48b7-bae4-ab193394f299-client-ca\") pod \"eb068b0a-4b6b-48b7-bae4-ab193394f299\" (UID: \"eb068b0a-4b6b-48b7-bae4-ab193394f299\") "
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.455552 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d5118e4-db44-4e09-a04d-2036e251936b-serving-cert\") pod \"4d5118e4-db44-4e09-a04d-2036e251936b\" (UID: \"4d5118e4-db44-4e09-a04d-2036e251936b\") "
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.455575 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d5118e4-db44-4e09-a04d-2036e251936b-client-ca\") pod \"4d5118e4-db44-4e09-a04d-2036e251936b\" (UID: \"4d5118e4-db44-4e09-a04d-2036e251936b\") "
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.455605 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44jkf\" (UniqueName: \"kubernetes.io/projected/4d5118e4-db44-4e09-a04d-2036e251936b-kube-api-access-44jkf\") pod \"4d5118e4-db44-4e09-a04d-2036e251936b\" (UID: \"4d5118e4-db44-4e09-a04d-2036e251936b\") "
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.455624 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6h8pr\" (UniqueName: \"kubernetes.io/projected/eb068b0a-4b6b-48b7-bae4-ab193394f299-kube-api-access-6h8pr\") pod \"eb068b0a-4b6b-48b7-bae4-ab193394f299\" (UID: \"eb068b0a-4b6b-48b7-bae4-ab193394f299\") "
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.455647 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb068b0a-4b6b-48b7-bae4-ab193394f299-serving-cert\") pod \"eb068b0a-4b6b-48b7-bae4-ab193394f299\" (UID: \"eb068b0a-4b6b-48b7-bae4-ab193394f299\") "
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.456976 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb068b0a-4b6b-48b7-bae4-ab193394f299-client-ca" (OuterVolumeSpecName: "client-ca") pod "eb068b0a-4b6b-48b7-bae4-ab193394f299" (UID: "eb068b0a-4b6b-48b7-bae4-ab193394f299"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.457161 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb068b0a-4b6b-48b7-bae4-ab193394f299-config" (OuterVolumeSpecName: "config") pod "eb068b0a-4b6b-48b7-bae4-ab193394f299" (UID: "eb068b0a-4b6b-48b7-bae4-ab193394f299"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.457225 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d5118e4-db44-4e09-a04d-2036e251936b-client-ca" (OuterVolumeSpecName: "client-ca") pod "4d5118e4-db44-4e09-a04d-2036e251936b" (UID: "4d5118e4-db44-4e09-a04d-2036e251936b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.457242 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d5118e4-db44-4e09-a04d-2036e251936b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4d5118e4-db44-4e09-a04d-2036e251936b" (UID: "4d5118e4-db44-4e09-a04d-2036e251936b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.457400 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d5118e4-db44-4e09-a04d-2036e251936b-config" (OuterVolumeSpecName: "config") pod "4d5118e4-db44-4e09-a04d-2036e251936b" (UID: "4d5118e4-db44-4e09-a04d-2036e251936b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.465774 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb068b0a-4b6b-48b7-bae4-ab193394f299-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "eb068b0a-4b6b-48b7-bae4-ab193394f299" (UID: "eb068b0a-4b6b-48b7-bae4-ab193394f299"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.465857 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d5118e4-db44-4e09-a04d-2036e251936b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4d5118e4-db44-4e09-a04d-2036e251936b" (UID: "4d5118e4-db44-4e09-a04d-2036e251936b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.466100 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d5118e4-db44-4e09-a04d-2036e251936b-kube-api-access-44jkf" (OuterVolumeSpecName: "kube-api-access-44jkf") pod "4d5118e4-db44-4e09-a04d-2036e251936b" (UID: "4d5118e4-db44-4e09-a04d-2036e251936b"). InnerVolumeSpecName "kube-api-access-44jkf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.471894 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb068b0a-4b6b-48b7-bae4-ab193394f299-kube-api-access-6h8pr" (OuterVolumeSpecName: "kube-api-access-6h8pr") pod "eb068b0a-4b6b-48b7-bae4-ab193394f299" (UID: "eb068b0a-4b6b-48b7-bae4-ab193394f299"). InnerVolumeSpecName "kube-api-access-6h8pr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.493565 4886 generic.go:334] "Generic (PLEG): container finished" podID="eb068b0a-4b6b-48b7-bae4-ab193394f299" containerID="bf056c7b64d1db40a273e61237f21df213f55de77057daa8d3f79b233f6b1bca" exitCode=0
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.493677 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9"
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.498415 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9" event={"ID":"eb068b0a-4b6b-48b7-bae4-ab193394f299","Type":"ContainerDied","Data":"bf056c7b64d1db40a273e61237f21df213f55de77057daa8d3f79b233f6b1bca"}
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.498475 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9" event={"ID":"eb068b0a-4b6b-48b7-bae4-ab193394f299","Type":"ContainerDied","Data":"5b391d085c08e1c1dfac270a21f6cff67072029830c3d61c34b03a6c51728f7e"}
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.498493 4886 scope.go:117] "RemoveContainer" containerID="bf056c7b64d1db40a273e61237f21df213f55de77057daa8d3f79b233f6b1bca"
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.505870 4886 generic.go:334] "Generic (PLEG): container finished" podID="4d5118e4-db44-4e09-a04d-2036e251936b" containerID="074bdcd69e5d52baa3572c419d1d23725c2153e656e43405d65063d3d379a2ec" exitCode=0
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.505914 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4rg2h" event={"ID":"4d5118e4-db44-4e09-a04d-2036e251936b","Type":"ContainerDied","Data":"074bdcd69e5d52baa3572c419d1d23725c2153e656e43405d65063d3d379a2ec"}
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.505946 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4rg2h" event={"ID":"4d5118e4-db44-4e09-a04d-2036e251936b","Type":"ContainerDied","Data":"6fff8a070d1d246b9de78c2701294ccd82667531237f5c020ada5028f01e8438"}
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.506004 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4rg2h"
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.529802 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9"]
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.530434 4886 scope.go:117] "RemoveContainer" containerID="bf056c7b64d1db40a273e61237f21df213f55de77057daa8d3f79b233f6b1bca"
Jan 29 16:27:43 crc kubenswrapper[4886]: E0129 16:27:43.530985 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf056c7b64d1db40a273e61237f21df213f55de77057daa8d3f79b233f6b1bca\": container with ID starting with bf056c7b64d1db40a273e61237f21df213f55de77057daa8d3f79b233f6b1bca not found: ID does not exist" containerID="bf056c7b64d1db40a273e61237f21df213f55de77057daa8d3f79b233f6b1bca"
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.531039 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf056c7b64d1db40a273e61237f21df213f55de77057daa8d3f79b233f6b1bca"} err="failed to get container status \"bf056c7b64d1db40a273e61237f21df213f55de77057daa8d3f79b233f6b1bca\": rpc error: code = NotFound desc = could not find container \"bf056c7b64d1db40a273e61237f21df213f55de77057daa8d3f79b233f6b1bca\": container with ID starting with bf056c7b64d1db40a273e61237f21df213f55de77057daa8d3f79b233f6b1bca not found: ID does not exist"
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.531070 4886 scope.go:117] "RemoveContainer" containerID="074bdcd69e5d52baa3572c419d1d23725c2153e656e43405d65063d3d379a2ec"
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.535317 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-h57m9"]
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.541723 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4rg2h"]
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.546108 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4rg2h"]
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.548657 4886 scope.go:117] "RemoveContainer" containerID="074bdcd69e5d52baa3572c419d1d23725c2153e656e43405d65063d3d379a2ec"
Jan 29 16:27:43 crc kubenswrapper[4886]: E0129 16:27:43.549119 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"074bdcd69e5d52baa3572c419d1d23725c2153e656e43405d65063d3d379a2ec\": container with ID starting with 074bdcd69e5d52baa3572c419d1d23725c2153e656e43405d65063d3d379a2ec not found: ID does not exist" containerID="074bdcd69e5d52baa3572c419d1d23725c2153e656e43405d65063d3d379a2ec"
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.549152 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"074bdcd69e5d52baa3572c419d1d23725c2153e656e43405d65063d3d379a2ec"} err="failed to get container status \"074bdcd69e5d52baa3572c419d1d23725c2153e656e43405d65063d3d379a2ec\": rpc error: code = NotFound desc = could not find container \"074bdcd69e5d52baa3572c419d1d23725c2153e656e43405d65063d3d379a2ec\": container with ID starting with 074bdcd69e5d52baa3572c419d1d23725c2153e656e43405d65063d3d379a2ec not found: ID does not exist"
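Each container above is removed twice: once when PLEG observes ContainerDied, and again when the API DELETE is processed; the second attempt fails with NotFound, which the kubelet logs and then ignores. The same tolerate-NotFound pattern in miniature (illustrative only; FakeRuntime and its method are hypothetical names standing in for a CRI client, not kubelet code):

    class FakeRuntime:
        # Hypothetical stand-in for a CRI runtime client.
        def __init__(self):
            self.containers = {"074bdcd69e5d": "controller-manager"}

        def remove_container(self, container_id: str) -> None:
            """Remove a container, treating 'already gone' as success (idempotent)."""
            if container_id not in self.containers:
                # Mirrors the behaviour in the records above: log the NotFound and move on.
                print(f"DeleteContainer returned error: container {container_id} not found: ID does not exist")
                return
            del self.containers[container_id]

    rt = FakeRuntime()
    rt.remove_container("074bdcd69e5d")  # first removal succeeds
    rt.remove_container("074bdcd69e5d")  # second removal hits NotFound and is tolerated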
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.556931 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d5118e4-db44-4e09-a04d-2036e251936b-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.556968 4886 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d5118e4-db44-4e09-a04d-2036e251936b-client-ca\") on node \"crc\" DevicePath \"\""
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.556983 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44jkf\" (UniqueName: \"kubernetes.io/projected/4d5118e4-db44-4e09-a04d-2036e251936b-kube-api-access-44jkf\") on node \"crc\" DevicePath \"\""
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.556998 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6h8pr\" (UniqueName: \"kubernetes.io/projected/eb068b0a-4b6b-48b7-bae4-ab193394f299-kube-api-access-6h8pr\") on node \"crc\" DevicePath \"\""
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.557010 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb068b0a-4b6b-48b7-bae4-ab193394f299-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.557021 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d5118e4-db44-4e09-a04d-2036e251936b-config\") on node \"crc\" DevicePath \"\""
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.557032 4886 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d5118e4-db44-4e09-a04d-2036e251936b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.557042 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb068b0a-4b6b-48b7-bae4-ab193394f299-config\") on node \"crc\" DevicePath \"\""
Jan 29 16:27:43 crc kubenswrapper[4886]: I0129 16:27:43.557053 4886 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb068b0a-4b6b-48b7-bae4-ab193394f299-client-ca\") on node \"crc\" DevicePath \"\""
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.621853 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d5118e4-db44-4e09-a04d-2036e251936b" path="/var/lib/kubelet/pods/4d5118e4-db44-4e09-a04d-2036e251936b/volumes"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.623012 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb068b0a-4b6b-48b7-bae4-ab193394f299" path="/var/lib/kubelet/pods/eb068b0a-4b6b-48b7-bae4-ab193394f299/volumes"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.863160 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-559577448b-qljqw"]
Jan 29 16:27:44 crc kubenswrapper[4886]: E0129 16:27:44.863486 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d5118e4-db44-4e09-a04d-2036e251936b" containerName="controller-manager"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.863505 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d5118e4-db44-4e09-a04d-2036e251936b" containerName="controller-manager"
Jan 29 16:27:44 crc kubenswrapper[4886]: E0129 16:27:44.863517 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb068b0a-4b6b-48b7-bae4-ab193394f299" containerName="route-controller-manager"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.863525 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb068b0a-4b6b-48b7-bae4-ab193394f299" containerName="route-controller-manager"
Jan 29 16:27:44 crc kubenswrapper[4886]: E0129 16:27:44.863535 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.863766 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.863911 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.863925 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb068b0a-4b6b-48b7-bae4-ab193394f299" containerName="route-controller-manager"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.863938 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d5118e4-db44-4e09-a04d-2036e251936b" containerName="controller-manager"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.864450 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-559577448b-qljqw"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.865993 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.866179 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.866190 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcd866c4c-tng49"]
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.866477 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.866599 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.866703 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.866772 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dcd866c4c-tng49"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.868568 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.868623 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.868745 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.868817 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.869064 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.869176 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.869589 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.916797 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.930230 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcd866c4c-tng49"]
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.937606 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-559577448b-qljqw"]
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.972551 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e7b68f8a-9483-479e-bf2d-441dff994e02-client-ca\") pod \"controller-manager-559577448b-qljqw\" (UID: \"e7b68f8a-9483-479e-bf2d-441dff994e02\") " pod="openshift-controller-manager/controller-manager-559577448b-qljqw"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.972609 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d01d62e5-f921-4e41-8744-23c91bf9310a-client-ca\") pod \"route-controller-manager-5dcd866c4c-tng49\" (UID: \"d01d62e5-f921-4e41-8744-23c91bf9310a\") " pod="openshift-route-controller-manager/route-controller-manager-5dcd866c4c-tng49"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.972648 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7b68f8a-9483-479e-bf2d-441dff994e02-config\") pod \"controller-manager-559577448b-qljqw\" (UID: \"e7b68f8a-9483-479e-bf2d-441dff994e02\") " pod="openshift-controller-manager/controller-manager-559577448b-qljqw"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.972685 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prrn5\" (UniqueName: \"kubernetes.io/projected/d01d62e5-f921-4e41-8744-23c91bf9310a-kube-api-access-prrn5\") pod \"route-controller-manager-5dcd866c4c-tng49\" (UID: \"d01d62e5-f921-4e41-8744-23c91bf9310a\") " pod="openshift-route-controller-manager/route-controller-manager-5dcd866c4c-tng49"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.972715 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d01d62e5-f921-4e41-8744-23c91bf9310a-serving-cert\") pod \"route-controller-manager-5dcd866c4c-tng49\" (UID: \"d01d62e5-f921-4e41-8744-23c91bf9310a\") " pod="openshift-route-controller-manager/route-controller-manager-5dcd866c4c-tng49"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.972743 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbqww\" (UniqueName: \"kubernetes.io/projected/e7b68f8a-9483-479e-bf2d-441dff994e02-kube-api-access-sbqww\") pod \"controller-manager-559577448b-qljqw\" (UID: \"e7b68f8a-9483-479e-bf2d-441dff994e02\") " pod="openshift-controller-manager/controller-manager-559577448b-qljqw"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.972764 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7b68f8a-9483-479e-bf2d-441dff994e02-serving-cert\") pod \"controller-manager-559577448b-qljqw\" (UID: \"e7b68f8a-9483-479e-bf2d-441dff994e02\") " pod="openshift-controller-manager/controller-manager-559577448b-qljqw"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.973090 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7b68f8a-9483-479e-bf2d-441dff994e02-proxy-ca-bundles\") pod \"controller-manager-559577448b-qljqw\" (UID: \"e7b68f8a-9483-479e-bf2d-441dff994e02\") " pod="openshift-controller-manager/controller-manager-559577448b-qljqw"
Jan 29 16:27:44 crc kubenswrapper[4886]: I0129 16:27:44.973234 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d01d62e5-f921-4e41-8744-23c91bf9310a-config\") pod \"route-controller-manager-5dcd866c4c-tng49\" (UID: \"d01d62e5-f921-4e41-8744-23c91bf9310a\") " pod="openshift-route-controller-manager/route-controller-manager-5dcd866c4c-tng49"
Jan 29 16:27:45 crc kubenswrapper[4886]: I0129 16:27:45.074774 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d01d62e5-f921-4e41-8744-23c91bf9310a-client-ca\") pod \"route-controller-manager-5dcd866c4c-tng49\" (UID: \"d01d62e5-f921-4e41-8744-23c91bf9310a\") " pod="openshift-route-controller-manager/route-controller-manager-5dcd866c4c-tng49"
Jan 29 16:27:45 crc kubenswrapper[4886]: I0129 16:27:45.074851 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7b68f8a-9483-479e-bf2d-441dff994e02-config\") pod \"controller-manager-559577448b-qljqw\" (UID: \"e7b68f8a-9483-479e-bf2d-441dff994e02\") " pod="openshift-controller-manager/controller-manager-559577448b-qljqw"
Jan 29 16:27:45 crc kubenswrapper[4886]: I0129 16:27:45.074883 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prrn5\" (UniqueName: \"kubernetes.io/projected/d01d62e5-f921-4e41-8744-23c91bf9310a-kube-api-access-prrn5\") pod \"route-controller-manager-5dcd866c4c-tng49\" (UID: \"d01d62e5-f921-4e41-8744-23c91bf9310a\") " pod="openshift-route-controller-manager/route-controller-manager-5dcd866c4c-tng49"
Jan 29 16:27:45 crc kubenswrapper[4886]: I0129 16:27:45.074923 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d01d62e5-f921-4e41-8744-23c91bf9310a-serving-cert\") pod \"route-controller-manager-5dcd866c4c-tng49\" (UID: \"d01d62e5-f921-4e41-8744-23c91bf9310a\") " pod="openshift-route-controller-manager/route-controller-manager-5dcd866c4c-tng49"
Jan 29 16:27:45 crc kubenswrapper[4886]: I0129 16:27:45.074941 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbqww\" (UniqueName: \"kubernetes.io/projected/e7b68f8a-9483-479e-bf2d-441dff994e02-kube-api-access-sbqww\") pod \"controller-manager-559577448b-qljqw\" (UID: \"e7b68f8a-9483-479e-bf2d-441dff994e02\") " pod="openshift-controller-manager/controller-manager-559577448b-qljqw"
Jan 29 16:27:45 crc kubenswrapper[4886]: I0129 16:27:45.074958 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7b68f8a-9483-479e-bf2d-441dff994e02-serving-cert\") pod \"controller-manager-559577448b-qljqw\" (UID: \"e7b68f8a-9483-479e-bf2d-441dff994e02\") " pod="openshift-controller-manager/controller-manager-559577448b-qljqw"
Jan 29 16:27:45 crc kubenswrapper[4886]: I0129 16:27:45.075001 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7b68f8a-9483-479e-bf2d-441dff994e02-proxy-ca-bundles\") pod \"controller-manager-559577448b-qljqw\" (UID: \"e7b68f8a-9483-479e-bf2d-441dff994e02\") " pod="openshift-controller-manager/controller-manager-559577448b-qljqw"
Jan 29 16:27:45 crc kubenswrapper[4886]: I0129 16:27:45.075125 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d01d62e5-f921-4e41-8744-23c91bf9310a-config\") pod \"route-controller-manager-5dcd866c4c-tng49\" (UID: \"d01d62e5-f921-4e41-8744-23c91bf9310a\") " pod="openshift-route-controller-manager/route-controller-manager-5dcd866c4c-tng49"
Jan 29 16:27:45 crc kubenswrapper[4886]: I0129 16:27:45.075169 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e7b68f8a-9483-479e-bf2d-441dff994e02-client-ca\") pod \"controller-manager-559577448b-qljqw\" (UID: \"e7b68f8a-9483-479e-bf2d-441dff994e02\") " pod="openshift-controller-manager/controller-manager-559577448b-qljqw"
Jan 29 16:27:45 crc kubenswrapper[4886]: I0129 16:27:45.076574 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d01d62e5-f921-4e41-8744-23c91bf9310a-client-ca\") pod \"route-controller-manager-5dcd866c4c-tng49\" (UID: \"d01d62e5-f921-4e41-8744-23c91bf9310a\") " pod="openshift-route-controller-manager/route-controller-manager-5dcd866c4c-tng49"
Jan 29 16:27:45 crc kubenswrapper[4886]: I0129 16:27:45.076804 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7b68f8a-9483-479e-bf2d-441dff994e02-proxy-ca-bundles\") pod \"controller-manager-559577448b-qljqw\" (UID: \"e7b68f8a-9483-479e-bf2d-441dff994e02\") " pod="openshift-controller-manager/controller-manager-559577448b-qljqw"
Jan 29 16:27:45 crc kubenswrapper[4886]: I0129 16:27:45.076894 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d01d62e5-f921-4e41-8744-23c91bf9310a-config\") pod \"route-controller-manager-5dcd866c4c-tng49\" (UID: \"d01d62e5-f921-4e41-8744-23c91bf9310a\") " pod="openshift-route-controller-manager/route-controller-manager-5dcd866c4c-tng49"
Jan 29 16:27:45 crc kubenswrapper[4886]: I0129 16:27:45.076902 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7b68f8a-9483-479e-bf2d-441dff994e02-config\") pod \"controller-manager-559577448b-qljqw\" (UID: \"e7b68f8a-9483-479e-bf2d-441dff994e02\") " pod="openshift-controller-manager/controller-manager-559577448b-qljqw"
Jan 29 16:27:45 crc kubenswrapper[4886]: I0129 16:27:45.078252 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e7b68f8a-9483-479e-bf2d-441dff994e02-client-ca\") pod \"controller-manager-559577448b-qljqw\" (UID: \"e7b68f8a-9483-479e-bf2d-441dff994e02\") " pod="openshift-controller-manager/controller-manager-559577448b-qljqw"
Jan 29 16:27:45 crc kubenswrapper[4886]: I0129 16:27:45.088501 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d01d62e5-f921-4e41-8744-23c91bf9310a-serving-cert\") pod \"route-controller-manager-5dcd866c4c-tng49\" (UID: \"d01d62e5-f921-4e41-8744-23c91bf9310a\") " pod="openshift-route-controller-manager/route-controller-manager-5dcd866c4c-tng49"
Jan 29 16:27:45 crc kubenswrapper[4886]: I0129 16:27:45.089943 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7b68f8a-9483-479e-bf2d-441dff994e02-serving-cert\") pod \"controller-manager-559577448b-qljqw\" (UID: \"e7b68f8a-9483-479e-bf2d-441dff994e02\") " pod="openshift-controller-manager/controller-manager-559577448b-qljqw"
Jan 29 16:27:45 crc kubenswrapper[4886]: I0129 16:27:45.091109 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prrn5\" (UniqueName: \"kubernetes.io/projected/d01d62e5-f921-4e41-8744-23c91bf9310a-kube-api-access-prrn5\") pod \"route-controller-manager-5dcd866c4c-tng49\" (UID: \"d01d62e5-f921-4e41-8744-23c91bf9310a\") " pod="openshift-route-controller-manager/route-controller-manager-5dcd866c4c-tng49"
Jan 29 16:27:45 crc kubenswrapper[4886]: I0129 16:27:45.092445 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbqww\" (UniqueName: \"kubernetes.io/projected/e7b68f8a-9483-479e-bf2d-441dff994e02-kube-api-access-sbqww\") pod \"controller-manager-559577448b-qljqw\" (UID: \"e7b68f8a-9483-479e-bf2d-441dff994e02\") " pod="openshift-controller-manager/controller-manager-559577448b-qljqw"
Jan 29 16:27:45 crc kubenswrapper[4886]: I0129 16:27:45.235451 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-559577448b-qljqw"
Jan 29 16:27:45 crc kubenswrapper[4886]: I0129 16:27:45.248890 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dcd866c4c-tng49"
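For the two replacement pods, every volume passes through the same three reconciler phases: VerifyControllerAttachedVolume started, MountVolume started, and MountVolume.SetUp succeeded. A sketch that checks whether each volume reached the final phase; the phase strings are copied from the records above, while the completeness check itself is an illustration:

    import re
    import sys
    from collections import defaultdict

    PHASES = [
        "operationExecutor.VerifyControllerAttachedVolume started",
        "operationExecutor.MountVolume started",
        "MountVolume.SetUp succeeded",
    ]
    # Volume names appear as: for volume \"client-ca\" (with journal-escaped quotes)
    VOLUME = re.compile(r'for volume \\?"(?P<vol>[^\\"]+)\\?"')

    def mount_progress(lines):
        """Map each volume name to the set of reconciler phases observed for it."""
        seen = defaultdict(set)
        for line in lines:
            for phase in PHASES:
                if phase in line:
                    m = VOLUME.search(line)
                    if m:
                        seen[m.group("vol")].add(phase)
        return seen

    if __name__ == "__main__":
        for vol, phases in sorted(mount_progress(sys.stdin).items()):
            missing = [p for p in PHASES if p not in phases]
            print(f"{vol}: {'complete' if not missing else 'missing ' + ', '.join(missing)}")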
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dcd866c4c-tng49" Jan 29 16:27:45 crc kubenswrapper[4886]: I0129 16:27:45.641171 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcd866c4c-tng49"] Jan 29 16:27:45 crc kubenswrapper[4886]: W0129 16:27:45.646791 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd01d62e5_f921_4e41_8744_23c91bf9310a.slice/crio-286af561bcdf63922bcd5294e28424bd5e44bed8924f37cc13287ce7fc2c6adc WatchSource:0}: Error finding container 286af561bcdf63922bcd5294e28424bd5e44bed8924f37cc13287ce7fc2c6adc: Status 404 returned error can't find the container with id 286af561bcdf63922bcd5294e28424bd5e44bed8924f37cc13287ce7fc2c6adc Jan 29 16:27:45 crc kubenswrapper[4886]: I0129 16:27:45.694948 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-559577448b-qljqw"] Jan 29 16:27:45 crc kubenswrapper[4886]: W0129 16:27:45.703130 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7b68f8a_9483_479e_bf2d_441dff994e02.slice/crio-61013901f79515c510fd797b6e9c94166fd6b2d802a9282570c4f90aaedd5f07 WatchSource:0}: Error finding container 61013901f79515c510fd797b6e9c94166fd6b2d802a9282570c4f90aaedd5f07: Status 404 returned error can't find the container with id 61013901f79515c510fd797b6e9c94166fd6b2d802a9282570c4f90aaedd5f07 Jan 29 16:27:46 crc kubenswrapper[4886]: I0129 16:27:46.527367 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-559577448b-qljqw" event={"ID":"e7b68f8a-9483-479e-bf2d-441dff994e02","Type":"ContainerStarted","Data":"1baf76b04c25852c14f6eddaeefa7479b2d32f63cecc26a393263dba5b8aedfb"} Jan 29 16:27:46 crc kubenswrapper[4886]: I0129 16:27:46.527702 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-559577448b-qljqw" event={"ID":"e7b68f8a-9483-479e-bf2d-441dff994e02","Type":"ContainerStarted","Data":"61013901f79515c510fd797b6e9c94166fd6b2d802a9282570c4f90aaedd5f07"} Jan 29 16:27:46 crc kubenswrapper[4886]: I0129 16:27:46.527720 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-559577448b-qljqw" Jan 29 16:27:46 crc kubenswrapper[4886]: I0129 16:27:46.528743 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dcd866c4c-tng49" event={"ID":"d01d62e5-f921-4e41-8744-23c91bf9310a","Type":"ContainerStarted","Data":"c5d1a86fa5476a1471825e4a1459b1da433b49876ffb5250f488558bb19e09ec"} Jan 29 16:27:46 crc kubenswrapper[4886]: I0129 16:27:46.528774 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dcd866c4c-tng49" event={"ID":"d01d62e5-f921-4e41-8744-23c91bf9310a","Type":"ContainerStarted","Data":"286af561bcdf63922bcd5294e28424bd5e44bed8924f37cc13287ce7fc2c6adc"} Jan 29 16:27:46 crc kubenswrapper[4886]: I0129 16:27:46.528955 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5dcd866c4c-tng49" Jan 29 16:27:46 crc kubenswrapper[4886]: I0129 16:27:46.531421 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-559577448b-qljqw" Jan 29 16:27:46 crc kubenswrapper[4886]: I0129 16:27:46.535684 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5dcd866c4c-tng49" Jan 29 16:27:46 crc kubenswrapper[4886]: I0129 16:27:46.560208 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-559577448b-qljqw" podStartSLOduration=4.560189992 podStartE2EDuration="4.560189992s" podCreationTimestamp="2026-01-29 16:27:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:27:46.545238557 +0000 UTC m=+349.453957849" watchObservedRunningTime="2026-01-29 16:27:46.560189992 +0000 UTC m=+349.468909264" Jan 29 16:27:46 crc kubenswrapper[4886]: I0129 16:27:46.575275 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5dcd866c4c-tng49" podStartSLOduration=4.57525874 podStartE2EDuration="4.57525874s" podCreationTimestamp="2026-01-29 16:27:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:27:46.574379074 +0000 UTC m=+349.483098356" watchObservedRunningTime="2026-01-29 16:27:46.57525874 +0000 UTC m=+349.483978012" Jan 29 16:27:47 crc kubenswrapper[4886]: I0129 16:27:47.830657 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xcj6l"] Jan 29 16:27:47 crc kubenswrapper[4886]: I0129 16:27:47.831300 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xcj6l" podUID="047adc93-cb46-4ba7-bbdf-4d485a08ea6b" containerName="registry-server" containerID="cri-o://bd7f7f68af6c019f5874ecc65bfcb6fd76594d7f15c29ffa88fbdeda070e9c5b" gracePeriod=30 Jan 29 16:27:47 crc kubenswrapper[4886]: I0129 16:27:47.847498 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cj9vs"] Jan 29 16:27:47 crc kubenswrapper[4886]: I0129 16:27:47.847824 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cj9vs" podUID="434ccaea-8a30-4a97-8908-64bc9f550de0" containerName="registry-server" containerID="cri-o://adf2c14310b6a7ba403bcc63dd65fff6abbc7aa1ceb7c9a65b7e84de9cf1376b" gracePeriod=30 Jan 29 16:27:47 crc kubenswrapper[4886]: I0129 16:27:47.861200 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w8bm4"] Jan 29 16:27:47 crc kubenswrapper[4886]: I0129 16:27:47.861586 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-w8bm4" podUID="17accc89-e860-4b12-b5b3-3da7adaa3430" containerName="marketplace-operator" containerID="cri-o://fd7fef5ae316b90316f06b6e489cce7174661acd1d0b44078f269a28b56f1f22" gracePeriod=30 Jan 29 16:27:47 crc kubenswrapper[4886]: I0129 16:27:47.868424 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xzc5s"] Jan 29 16:27:47 crc kubenswrapper[4886]: I0129 16:27:47.868675 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xzc5s" 
podUID="d8a07d27-67fb-47e8-9032-e4f831983d75" containerName="registry-server" containerID="cri-o://233eefe83f891bb8ff6279b8ca319fdb899c0d7dc84bfe73ee251483fff54d0f" gracePeriod=30 Jan 29 16:27:47 crc kubenswrapper[4886]: I0129 16:27:47.872493 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qtk7r"] Jan 29 16:27:47 crc kubenswrapper[4886]: I0129 16:27:47.873760 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qtk7r" Jan 29 16:27:47 crc kubenswrapper[4886]: I0129 16:27:47.880475 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6hph6"] Jan 29 16:27:47 crc kubenswrapper[4886]: I0129 16:27:47.880860 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6hph6" podUID="c36e6697-37b9-4b10-baea-0f9c92014c79" containerName="registry-server" containerID="cri-o://9d4035b0a0d02345b7ffc32586d2f6e1f50c9f460c46150e1796f4be0de2d1cc" gracePeriod=30 Jan 29 16:27:47 crc kubenswrapper[4886]: I0129 16:27:47.893642 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qtk7r"] Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.021036 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b8dc70-b29d-4995-9727-9b8e032bdad9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qtk7r\" (UID: \"42b8dc70-b29d-4995-9727-9b8e032bdad9\") " pod="openshift-marketplace/marketplace-operator-79b997595-qtk7r" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.021316 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzm6k\" (UniqueName: \"kubernetes.io/projected/42b8dc70-b29d-4995-9727-9b8e032bdad9-kube-api-access-pzm6k\") pod \"marketplace-operator-79b997595-qtk7r\" (UID: \"42b8dc70-b29d-4995-9727-9b8e032bdad9\") " pod="openshift-marketplace/marketplace-operator-79b997595-qtk7r" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.021379 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/42b8dc70-b29d-4995-9727-9b8e032bdad9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qtk7r\" (UID: \"42b8dc70-b29d-4995-9727-9b8e032bdad9\") " pod="openshift-marketplace/marketplace-operator-79b997595-qtk7r" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.122283 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/42b8dc70-b29d-4995-9727-9b8e032bdad9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qtk7r\" (UID: \"42b8dc70-b29d-4995-9727-9b8e032bdad9\") " pod="openshift-marketplace/marketplace-operator-79b997595-qtk7r" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.122397 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b8dc70-b29d-4995-9727-9b8e032bdad9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qtk7r\" (UID: \"42b8dc70-b29d-4995-9727-9b8e032bdad9\") " pod="openshift-marketplace/marketplace-operator-79b997595-qtk7r" Jan 29 16:27:48 crc 
kubenswrapper[4886]: I0129 16:27:48.122438 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzm6k\" (UniqueName: \"kubernetes.io/projected/42b8dc70-b29d-4995-9727-9b8e032bdad9-kube-api-access-pzm6k\") pod \"marketplace-operator-79b997595-qtk7r\" (UID: \"42b8dc70-b29d-4995-9727-9b8e032bdad9\") " pod="openshift-marketplace/marketplace-operator-79b997595-qtk7r" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.123988 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b8dc70-b29d-4995-9727-9b8e032bdad9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qtk7r\" (UID: \"42b8dc70-b29d-4995-9727-9b8e032bdad9\") " pod="openshift-marketplace/marketplace-operator-79b997595-qtk7r" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.143804 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzm6k\" (UniqueName: \"kubernetes.io/projected/42b8dc70-b29d-4995-9727-9b8e032bdad9-kube-api-access-pzm6k\") pod \"marketplace-operator-79b997595-qtk7r\" (UID: \"42b8dc70-b29d-4995-9727-9b8e032bdad9\") " pod="openshift-marketplace/marketplace-operator-79b997595-qtk7r" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.146635 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/42b8dc70-b29d-4995-9727-9b8e032bdad9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qtk7r\" (UID: \"42b8dc70-b29d-4995-9727-9b8e032bdad9\") " pod="openshift-marketplace/marketplace-operator-79b997595-qtk7r" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.282406 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qtk7r" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.300060 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xcj6l" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.379603 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6hph6" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.396860 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-w8bm4" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.401953 4886 util.go:48] "No ready sandbox for pod can be found. 
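
The replacement marketplace-operator pod's volumes each pass through the reconciler's standard three steps above: VerifyControllerAttachedVolume, MountVolume started, then MountVolume.SetUp succeeded, once per volume (a configmap, a secret, and a projected service-account token). One way to cross-check these volume names against the pod spec, assuming oc access to this cluster:

    $ oc get pod marketplace-operator-79b997595-qtk7r -n openshift-marketplace -o jsonpath='{.spec.volumes[*].name}'
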
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.428676 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn6qn\" (UniqueName: \"kubernetes.io/projected/047adc93-cb46-4ba7-bbdf-4d485a08ea6b-kube-api-access-xn6qn\") pod \"047adc93-cb46-4ba7-bbdf-4d485a08ea6b\" (UID: \"047adc93-cb46-4ba7-bbdf-4d485a08ea6b\") "
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.428989 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/047adc93-cb46-4ba7-bbdf-4d485a08ea6b-utilities\") pod \"047adc93-cb46-4ba7-bbdf-4d485a08ea6b\" (UID: \"047adc93-cb46-4ba7-bbdf-4d485a08ea6b\") "
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.429028 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/047adc93-cb46-4ba7-bbdf-4d485a08ea6b-catalog-content\") pod \"047adc93-cb46-4ba7-bbdf-4d485a08ea6b\" (UID: \"047adc93-cb46-4ba7-bbdf-4d485a08ea6b\") "
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.432955 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/047adc93-cb46-4ba7-bbdf-4d485a08ea6b-utilities" (OuterVolumeSpecName: "utilities") pod "047adc93-cb46-4ba7-bbdf-4d485a08ea6b" (UID: "047adc93-cb46-4ba7-bbdf-4d485a08ea6b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.439217 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/047adc93-cb46-4ba7-bbdf-4d485a08ea6b-kube-api-access-xn6qn" (OuterVolumeSpecName: "kube-api-access-xn6qn") pod "047adc93-cb46-4ba7-bbdf-4d485a08ea6b" (UID: "047adc93-cb46-4ba7-bbdf-4d485a08ea6b"). InnerVolumeSpecName "kube-api-access-xn6qn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.445724 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xzc5s"
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.509250 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/047adc93-cb46-4ba7-bbdf-4d485a08ea6b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "047adc93-cb46-4ba7-bbdf-4d485a08ea6b" (UID: "047adc93-cb46-4ba7-bbdf-4d485a08ea6b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.530426 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/434ccaea-8a30-4a97-8908-64bc9f550de0-catalog-content\") pod \"434ccaea-8a30-4a97-8908-64bc9f550de0\" (UID: \"434ccaea-8a30-4a97-8908-64bc9f550de0\") "
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.530510 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/434ccaea-8a30-4a97-8908-64bc9f550de0-utilities\") pod \"434ccaea-8a30-4a97-8908-64bc9f550de0\" (UID: \"434ccaea-8a30-4a97-8908-64bc9f550de0\") "
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.530572 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/17accc89-e860-4b12-b5b3-3da7adaa3430-marketplace-operator-metrics\") pod \"17accc89-e860-4b12-b5b3-3da7adaa3430\" (UID: \"17accc89-e860-4b12-b5b3-3da7adaa3430\") "
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.530605 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gjgt\" (UniqueName: \"kubernetes.io/projected/434ccaea-8a30-4a97-8908-64bc9f550de0-kube-api-access-4gjgt\") pod \"434ccaea-8a30-4a97-8908-64bc9f550de0\" (UID: \"434ccaea-8a30-4a97-8908-64bc9f550de0\") "
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.530657 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf8xv\" (UniqueName: \"kubernetes.io/projected/c36e6697-37b9-4b10-baea-0f9c92014c79-kube-api-access-qf8xv\") pod \"c36e6697-37b9-4b10-baea-0f9c92014c79\" (UID: \"c36e6697-37b9-4b10-baea-0f9c92014c79\") "
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.530699 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbgjh\" (UniqueName: \"kubernetes.io/projected/17accc89-e860-4b12-b5b3-3da7adaa3430-kube-api-access-fbgjh\") pod \"17accc89-e860-4b12-b5b3-3da7adaa3430\" (UID: \"17accc89-e860-4b12-b5b3-3da7adaa3430\") "
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.530750 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xncm2\" (UniqueName: \"kubernetes.io/projected/d8a07d27-67fb-47e8-9032-e4f831983d75-kube-api-access-xncm2\") pod \"d8a07d27-67fb-47e8-9032-e4f831983d75\" (UID: \"d8a07d27-67fb-47e8-9032-e4f831983d75\") "
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.530778 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c36e6697-37b9-4b10-baea-0f9c92014c79-utilities\") pod \"c36e6697-37b9-4b10-baea-0f9c92014c79\" (UID: \"c36e6697-37b9-4b10-baea-0f9c92014c79\") "
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.530840 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8a07d27-67fb-47e8-9032-e4f831983d75-catalog-content\") pod \"d8a07d27-67fb-47e8-9032-e4f831983d75\" (UID: \"d8a07d27-67fb-47e8-9032-e4f831983d75\") "
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.530864 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c36e6697-37b9-4b10-baea-0f9c92014c79-catalog-content\") pod \"c36e6697-37b9-4b10-baea-0f9c92014c79\" (UID: \"c36e6697-37b9-4b10-baea-0f9c92014c79\") "
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.531547 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17accc89-e860-4b12-b5b3-3da7adaa3430-marketplace-trusted-ca\") pod \"17accc89-e860-4b12-b5b3-3da7adaa3430\" (UID: \"17accc89-e860-4b12-b5b3-3da7adaa3430\") "
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.531626 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8a07d27-67fb-47e8-9032-e4f831983d75-utilities\") pod \"d8a07d27-67fb-47e8-9032-e4f831983d75\" (UID: \"d8a07d27-67fb-47e8-9032-e4f831983d75\") "
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.531995 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn6qn\" (UniqueName: \"kubernetes.io/projected/047adc93-cb46-4ba7-bbdf-4d485a08ea6b-kube-api-access-xn6qn\") on node \"crc\" DevicePath \"\""
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.532051 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/047adc93-cb46-4ba7-bbdf-4d485a08ea6b-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.532067 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/047adc93-cb46-4ba7-bbdf-4d485a08ea6b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.531465 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/434ccaea-8a30-4a97-8908-64bc9f550de0-utilities" (OuterVolumeSpecName: "utilities") pod "434ccaea-8a30-4a97-8908-64bc9f550de0" (UID: "434ccaea-8a30-4a97-8908-64bc9f550de0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.533051 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8a07d27-67fb-47e8-9032-e4f831983d75-utilities" (OuterVolumeSpecName: "utilities") pod "d8a07d27-67fb-47e8-9032-e4f831983d75" (UID: "d8a07d27-67fb-47e8-9032-e4f831983d75"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.533430 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c36e6697-37b9-4b10-baea-0f9c92014c79-utilities" (OuterVolumeSpecName: "utilities") pod "c36e6697-37b9-4b10-baea-0f9c92014c79" (UID: "c36e6697-37b9-4b10-baea-0f9c92014c79"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.535483 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17accc89-e860-4b12-b5b3-3da7adaa3430-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "17accc89-e860-4b12-b5b3-3da7adaa3430" (UID: "17accc89-e860-4b12-b5b3-3da7adaa3430"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.536087 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/434ccaea-8a30-4a97-8908-64bc9f550de0-kube-api-access-4gjgt" (OuterVolumeSpecName: "kube-api-access-4gjgt") pod "434ccaea-8a30-4a97-8908-64bc9f550de0" (UID: "434ccaea-8a30-4a97-8908-64bc9f550de0"). InnerVolumeSpecName "kube-api-access-4gjgt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.537219 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8a07d27-67fb-47e8-9032-e4f831983d75-kube-api-access-xncm2" (OuterVolumeSpecName: "kube-api-access-xncm2") pod "d8a07d27-67fb-47e8-9032-e4f831983d75" (UID: "d8a07d27-67fb-47e8-9032-e4f831983d75"). InnerVolumeSpecName "kube-api-access-xncm2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.538567 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c36e6697-37b9-4b10-baea-0f9c92014c79-kube-api-access-qf8xv" (OuterVolumeSpecName: "kube-api-access-qf8xv") pod "c36e6697-37b9-4b10-baea-0f9c92014c79" (UID: "c36e6697-37b9-4b10-baea-0f9c92014c79"). InnerVolumeSpecName "kube-api-access-qf8xv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.542123 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17accc89-e860-4b12-b5b3-3da7adaa3430-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "17accc89-e860-4b12-b5b3-3da7adaa3430" (UID: "17accc89-e860-4b12-b5b3-3da7adaa3430"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.543275 4886 generic.go:334] "Generic (PLEG): container finished" podID="d8a07d27-67fb-47e8-9032-e4f831983d75" containerID="233eefe83f891bb8ff6279b8ca319fdb899c0d7dc84bfe73ee251483fff54d0f" exitCode=0
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.543349 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xzc5s"
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.543391 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xzc5s" event={"ID":"d8a07d27-67fb-47e8-9032-e4f831983d75","Type":"ContainerDied","Data":"233eefe83f891bb8ff6279b8ca319fdb899c0d7dc84bfe73ee251483fff54d0f"}
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.543449 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xzc5s" event={"ID":"d8a07d27-67fb-47e8-9032-e4f831983d75","Type":"ContainerDied","Data":"8df354200569f756ef71068446371a43cfad097210faf33ea3e2d3966f2eb917"}
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.543498 4886 scope.go:117] "RemoveContainer" containerID="233eefe83f891bb8ff6279b8ca319fdb899c0d7dc84bfe73ee251483fff54d0f"
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.552544 4886 generic.go:334] "Generic (PLEG): container finished" podID="434ccaea-8a30-4a97-8908-64bc9f550de0" containerID="adf2c14310b6a7ba403bcc63dd65fff6abbc7aa1ceb7c9a65b7e84de9cf1376b" exitCode=0
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.552690 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cj9vs"
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.552727 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cj9vs" event={"ID":"434ccaea-8a30-4a97-8908-64bc9f550de0","Type":"ContainerDied","Data":"adf2c14310b6a7ba403bcc63dd65fff6abbc7aa1ceb7c9a65b7e84de9cf1376b"}
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.552764 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cj9vs" event={"ID":"434ccaea-8a30-4a97-8908-64bc9f550de0","Type":"ContainerDied","Data":"c930283727a8af009300e17c576da570a17d69226a2431e0b8f6442ab7a33682"}
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.553116 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17accc89-e860-4b12-b5b3-3da7adaa3430-kube-api-access-fbgjh" (OuterVolumeSpecName: "kube-api-access-fbgjh") pod "17accc89-e860-4b12-b5b3-3da7adaa3430" (UID: "17accc89-e860-4b12-b5b3-3da7adaa3430"). InnerVolumeSpecName "kube-api-access-fbgjh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.555398 4886 generic.go:334] "Generic (PLEG): container finished" podID="c36e6697-37b9-4b10-baea-0f9c92014c79" containerID="9d4035b0a0d02345b7ffc32586d2f6e1f50c9f460c46150e1796f4be0de2d1cc" exitCode=0
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.555448 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hph6" event={"ID":"c36e6697-37b9-4b10-baea-0f9c92014c79","Type":"ContainerDied","Data":"9d4035b0a0d02345b7ffc32586d2f6e1f50c9f460c46150e1796f4be0de2d1cc"}
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.555469 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hph6" event={"ID":"c36e6697-37b9-4b10-baea-0f9c92014c79","Type":"ContainerDied","Data":"2597500a6782cab3fff1d1bf05e088755f933968f6726da1d1dcae802c73e7f3"}
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.555535 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6hph6"
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.564880 4886 generic.go:334] "Generic (PLEG): container finished" podID="047adc93-cb46-4ba7-bbdf-4d485a08ea6b" containerID="bd7f7f68af6c019f5874ecc65bfcb6fd76594d7f15c29ffa88fbdeda070e9c5b" exitCode=0
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.564930 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcj6l" event={"ID":"047adc93-cb46-4ba7-bbdf-4d485a08ea6b","Type":"ContainerDied","Data":"bd7f7f68af6c019f5874ecc65bfcb6fd76594d7f15c29ffa88fbdeda070e9c5b"}
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.564952 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcj6l" event={"ID":"047adc93-cb46-4ba7-bbdf-4d485a08ea6b","Type":"ContainerDied","Data":"b49a4641d27203a40e0f7e4f28f82c1063741221c6c208a86d4e1a5bc30f7000"}
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.565009 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xcj6l"
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.566525 4886 scope.go:117] "RemoveContainer" containerID="ceae5fdac3eed7f1c5974c445ed3419dbfa10feff4c8309145af3e9ea005f153"
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.568600 4886 generic.go:334] "Generic (PLEG): container finished" podID="17accc89-e860-4b12-b5b3-3da7adaa3430" containerID="fd7fef5ae316b90316f06b6e489cce7174661acd1d0b44078f269a28b56f1f22" exitCode=0
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.568704 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-w8bm4"
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.568725 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-w8bm4" event={"ID":"17accc89-e860-4b12-b5b3-3da7adaa3430","Type":"ContainerDied","Data":"fd7fef5ae316b90316f06b6e489cce7174661acd1d0b44078f269a28b56f1f22"}
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.568776 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-w8bm4" event={"ID":"17accc89-e860-4b12-b5b3-3da7adaa3430","Type":"ContainerDied","Data":"496e5ab4c79c2396e707c4fc94a4d2815e8f1572d6df45519acda3977888c122"}
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.576632 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8a07d27-67fb-47e8-9032-e4f831983d75-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d8a07d27-67fb-47e8-9032-e4f831983d75" (UID: "d8a07d27-67fb-47e8-9032-e4f831983d75"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.589265 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/434ccaea-8a30-4a97-8908-64bc9f550de0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "434ccaea-8a30-4a97-8908-64bc9f550de0" (UID: "434ccaea-8a30-4a97-8908-64bc9f550de0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.590813 4886 scope.go:117] "RemoveContainer" containerID="3fb3181dff0539237c77e3f3e6bfc2daf84ba731ba94f2127334c7ba90e867dd" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.593758 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xcj6l"] Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.600704 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xcj6l"] Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.604510 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w8bm4"] Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.607145 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w8bm4"] Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.621305 4886 scope.go:117] "RemoveContainer" containerID="233eefe83f891bb8ff6279b8ca319fdb899c0d7dc84bfe73ee251483fff54d0f" Jan 29 16:27:48 crc kubenswrapper[4886]: E0129 16:27:48.621801 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"233eefe83f891bb8ff6279b8ca319fdb899c0d7dc84bfe73ee251483fff54d0f\": container with ID starting with 233eefe83f891bb8ff6279b8ca319fdb899c0d7dc84bfe73ee251483fff54d0f not found: ID does not exist" containerID="233eefe83f891bb8ff6279b8ca319fdb899c0d7dc84bfe73ee251483fff54d0f" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.621835 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"233eefe83f891bb8ff6279b8ca319fdb899c0d7dc84bfe73ee251483fff54d0f"} err="failed to get container status \"233eefe83f891bb8ff6279b8ca319fdb899c0d7dc84bfe73ee251483fff54d0f\": rpc error: code = NotFound desc = could not find container \"233eefe83f891bb8ff6279b8ca319fdb899c0d7dc84bfe73ee251483fff54d0f\": container with ID starting with 233eefe83f891bb8ff6279b8ca319fdb899c0d7dc84bfe73ee251483fff54d0f not found: ID does not exist" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.621858 4886 scope.go:117] "RemoveContainer" containerID="ceae5fdac3eed7f1c5974c445ed3419dbfa10feff4c8309145af3e9ea005f153" Jan 29 16:27:48 crc kubenswrapper[4886]: E0129 16:27:48.622383 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceae5fdac3eed7f1c5974c445ed3419dbfa10feff4c8309145af3e9ea005f153\": container with ID starting with ceae5fdac3eed7f1c5974c445ed3419dbfa10feff4c8309145af3e9ea005f153 not found: ID does not exist" containerID="ceae5fdac3eed7f1c5974c445ed3419dbfa10feff4c8309145af3e9ea005f153" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.622419 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceae5fdac3eed7f1c5974c445ed3419dbfa10feff4c8309145af3e9ea005f153"} err="failed to get container status \"ceae5fdac3eed7f1c5974c445ed3419dbfa10feff4c8309145af3e9ea005f153\": rpc error: code = NotFound desc = could not find container \"ceae5fdac3eed7f1c5974c445ed3419dbfa10feff4c8309145af3e9ea005f153\": container with ID starting with ceae5fdac3eed7f1c5974c445ed3419dbfa10feff4c8309145af3e9ea005f153 not found: ID does not exist" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.622445 4886 scope.go:117] "RemoveContainer" 
containerID="3fb3181dff0539237c77e3f3e6bfc2daf84ba731ba94f2127334c7ba90e867dd" Jan 29 16:27:48 crc kubenswrapper[4886]: E0129 16:27:48.622658 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fb3181dff0539237c77e3f3e6bfc2daf84ba731ba94f2127334c7ba90e867dd\": container with ID starting with 3fb3181dff0539237c77e3f3e6bfc2daf84ba731ba94f2127334c7ba90e867dd not found: ID does not exist" containerID="3fb3181dff0539237c77e3f3e6bfc2daf84ba731ba94f2127334c7ba90e867dd" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.622683 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fb3181dff0539237c77e3f3e6bfc2daf84ba731ba94f2127334c7ba90e867dd"} err="failed to get container status \"3fb3181dff0539237c77e3f3e6bfc2daf84ba731ba94f2127334c7ba90e867dd\": rpc error: code = NotFound desc = could not find container \"3fb3181dff0539237c77e3f3e6bfc2daf84ba731ba94f2127334c7ba90e867dd\": container with ID starting with 3fb3181dff0539237c77e3f3e6bfc2daf84ba731ba94f2127334c7ba90e867dd not found: ID does not exist" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.622697 4886 scope.go:117] "RemoveContainer" containerID="adf2c14310b6a7ba403bcc63dd65fff6abbc7aa1ceb7c9a65b7e84de9cf1376b" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.624749 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="047adc93-cb46-4ba7-bbdf-4d485a08ea6b" path="/var/lib/kubelet/pods/047adc93-cb46-4ba7-bbdf-4d485a08ea6b/volumes" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.625452 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17accc89-e860-4b12-b5b3-3da7adaa3430" path="/var/lib/kubelet/pods/17accc89-e860-4b12-b5b3-3da7adaa3430/volumes" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.633306 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c36e6697-37b9-4b10-baea-0f9c92014c79-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.633397 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8a07d27-67fb-47e8-9032-e4f831983d75-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.633410 4886 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17accc89-e860-4b12-b5b3-3da7adaa3430-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.633421 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8a07d27-67fb-47e8-9032-e4f831983d75-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.633430 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/434ccaea-8a30-4a97-8908-64bc9f550de0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.633438 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/434ccaea-8a30-4a97-8908-64bc9f550de0-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.633446 4886 reconciler_common.go:293] "Volume detached for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/17accc89-e860-4b12-b5b3-3da7adaa3430-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.633455 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gjgt\" (UniqueName: \"kubernetes.io/projected/434ccaea-8a30-4a97-8908-64bc9f550de0-kube-api-access-4gjgt\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.633464 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qf8xv\" (UniqueName: \"kubernetes.io/projected/c36e6697-37b9-4b10-baea-0f9c92014c79-kube-api-access-qf8xv\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.633474 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbgjh\" (UniqueName: \"kubernetes.io/projected/17accc89-e860-4b12-b5b3-3da7adaa3430-kube-api-access-fbgjh\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.633484 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xncm2\" (UniqueName: \"kubernetes.io/projected/d8a07d27-67fb-47e8-9032-e4f831983d75-kube-api-access-xncm2\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.635733 4886 scope.go:117] "RemoveContainer" containerID="5848b4e5a6379779bfe01d51a16e2bc5ee511c62178bbd791e055867e63873da" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.652837 4886 scope.go:117] "RemoveContainer" containerID="9b90bb78250828a8de92c52ee575ca760465a8522cc7fc51c14297899de5ae91" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.664197 4886 scope.go:117] "RemoveContainer" containerID="adf2c14310b6a7ba403bcc63dd65fff6abbc7aa1ceb7c9a65b7e84de9cf1376b" Jan 29 16:27:48 crc kubenswrapper[4886]: E0129 16:27:48.664597 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adf2c14310b6a7ba403bcc63dd65fff6abbc7aa1ceb7c9a65b7e84de9cf1376b\": container with ID starting with adf2c14310b6a7ba403bcc63dd65fff6abbc7aa1ceb7c9a65b7e84de9cf1376b not found: ID does not exist" containerID="adf2c14310b6a7ba403bcc63dd65fff6abbc7aa1ceb7c9a65b7e84de9cf1376b" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.664644 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adf2c14310b6a7ba403bcc63dd65fff6abbc7aa1ceb7c9a65b7e84de9cf1376b"} err="failed to get container status \"adf2c14310b6a7ba403bcc63dd65fff6abbc7aa1ceb7c9a65b7e84de9cf1376b\": rpc error: code = NotFound desc = could not find container \"adf2c14310b6a7ba403bcc63dd65fff6abbc7aa1ceb7c9a65b7e84de9cf1376b\": container with ID starting with adf2c14310b6a7ba403bcc63dd65fff6abbc7aa1ceb7c9a65b7e84de9cf1376b not found: ID does not exist" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.664680 4886 scope.go:117] "RemoveContainer" containerID="5848b4e5a6379779bfe01d51a16e2bc5ee511c62178bbd791e055867e63873da" Jan 29 16:27:48 crc kubenswrapper[4886]: E0129 16:27:48.664944 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5848b4e5a6379779bfe01d51a16e2bc5ee511c62178bbd791e055867e63873da\": container with ID starting with 5848b4e5a6379779bfe01d51a16e2bc5ee511c62178bbd791e055867e63873da not found: ID does not exist" containerID="5848b4e5a6379779bfe01d51a16e2bc5ee511c62178bbd791e055867e63873da" Jan 29 16:27:48 
crc kubenswrapper[4886]: I0129 16:27:48.665006 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5848b4e5a6379779bfe01d51a16e2bc5ee511c62178bbd791e055867e63873da"} err="failed to get container status \"5848b4e5a6379779bfe01d51a16e2bc5ee511c62178bbd791e055867e63873da\": rpc error: code = NotFound desc = could not find container \"5848b4e5a6379779bfe01d51a16e2bc5ee511c62178bbd791e055867e63873da\": container with ID starting with 5848b4e5a6379779bfe01d51a16e2bc5ee511c62178bbd791e055867e63873da not found: ID does not exist" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.665026 4886 scope.go:117] "RemoveContainer" containerID="9b90bb78250828a8de92c52ee575ca760465a8522cc7fc51c14297899de5ae91" Jan 29 16:27:48 crc kubenswrapper[4886]: E0129 16:27:48.665248 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b90bb78250828a8de92c52ee575ca760465a8522cc7fc51c14297899de5ae91\": container with ID starting with 9b90bb78250828a8de92c52ee575ca760465a8522cc7fc51c14297899de5ae91 not found: ID does not exist" containerID="9b90bb78250828a8de92c52ee575ca760465a8522cc7fc51c14297899de5ae91" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.665276 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b90bb78250828a8de92c52ee575ca760465a8522cc7fc51c14297899de5ae91"} err="failed to get container status \"9b90bb78250828a8de92c52ee575ca760465a8522cc7fc51c14297899de5ae91\": rpc error: code = NotFound desc = could not find container \"9b90bb78250828a8de92c52ee575ca760465a8522cc7fc51c14297899de5ae91\": container with ID starting with 9b90bb78250828a8de92c52ee575ca760465a8522cc7fc51c14297899de5ae91 not found: ID does not exist" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.665291 4886 scope.go:117] "RemoveContainer" containerID="9d4035b0a0d02345b7ffc32586d2f6e1f50c9f460c46150e1796f4be0de2d1cc" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.676931 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c36e6697-37b9-4b10-baea-0f9c92014c79-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c36e6697-37b9-4b10-baea-0f9c92014c79" (UID: "c36e6697-37b9-4b10-baea-0f9c92014c79"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.688695 4886 scope.go:117] "RemoveContainer" containerID="7344b3cddb96e29cffb588d3f380405658d001e938c3fd9a59f0d4c9ea5aa16e" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.707130 4886 scope.go:117] "RemoveContainer" containerID="0cdb18d5f5fa9a44559e46fd01c9effbb1ab6cf3c5ac5db03199ac60dda03f17" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.721468 4886 scope.go:117] "RemoveContainer" containerID="9d4035b0a0d02345b7ffc32586d2f6e1f50c9f460c46150e1796f4be0de2d1cc" Jan 29 16:27:48 crc kubenswrapper[4886]: E0129 16:27:48.722154 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d4035b0a0d02345b7ffc32586d2f6e1f50c9f460c46150e1796f4be0de2d1cc\": container with ID starting with 9d4035b0a0d02345b7ffc32586d2f6e1f50c9f460c46150e1796f4be0de2d1cc not found: ID does not exist" containerID="9d4035b0a0d02345b7ffc32586d2f6e1f50c9f460c46150e1796f4be0de2d1cc" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.722196 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d4035b0a0d02345b7ffc32586d2f6e1f50c9f460c46150e1796f4be0de2d1cc"} err="failed to get container status \"9d4035b0a0d02345b7ffc32586d2f6e1f50c9f460c46150e1796f4be0de2d1cc\": rpc error: code = NotFound desc = could not find container \"9d4035b0a0d02345b7ffc32586d2f6e1f50c9f460c46150e1796f4be0de2d1cc\": container with ID starting with 9d4035b0a0d02345b7ffc32586d2f6e1f50c9f460c46150e1796f4be0de2d1cc not found: ID does not exist" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.722224 4886 scope.go:117] "RemoveContainer" containerID="7344b3cddb96e29cffb588d3f380405658d001e938c3fd9a59f0d4c9ea5aa16e" Jan 29 16:27:48 crc kubenswrapper[4886]: E0129 16:27:48.722593 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7344b3cddb96e29cffb588d3f380405658d001e938c3fd9a59f0d4c9ea5aa16e\": container with ID starting with 7344b3cddb96e29cffb588d3f380405658d001e938c3fd9a59f0d4c9ea5aa16e not found: ID does not exist" containerID="7344b3cddb96e29cffb588d3f380405658d001e938c3fd9a59f0d4c9ea5aa16e" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.722633 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7344b3cddb96e29cffb588d3f380405658d001e938c3fd9a59f0d4c9ea5aa16e"} err="failed to get container status \"7344b3cddb96e29cffb588d3f380405658d001e938c3fd9a59f0d4c9ea5aa16e\": rpc error: code = NotFound desc = could not find container \"7344b3cddb96e29cffb588d3f380405658d001e938c3fd9a59f0d4c9ea5aa16e\": container with ID starting with 7344b3cddb96e29cffb588d3f380405658d001e938c3fd9a59f0d4c9ea5aa16e not found: ID does not exist" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.722681 4886 scope.go:117] "RemoveContainer" containerID="0cdb18d5f5fa9a44559e46fd01c9effbb1ab6cf3c5ac5db03199ac60dda03f17" Jan 29 16:27:48 crc kubenswrapper[4886]: E0129 16:27:48.723972 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cdb18d5f5fa9a44559e46fd01c9effbb1ab6cf3c5ac5db03199ac60dda03f17\": container with ID starting with 0cdb18d5f5fa9a44559e46fd01c9effbb1ab6cf3c5ac5db03199ac60dda03f17 not found: ID does not exist" containerID="0cdb18d5f5fa9a44559e46fd01c9effbb1ab6cf3c5ac5db03199ac60dda03f17" Jan 29 16:27:48 crc 
kubenswrapper[4886]: I0129 16:27:48.724018 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cdb18d5f5fa9a44559e46fd01c9effbb1ab6cf3c5ac5db03199ac60dda03f17"} err="failed to get container status \"0cdb18d5f5fa9a44559e46fd01c9effbb1ab6cf3c5ac5db03199ac60dda03f17\": rpc error: code = NotFound desc = could not find container \"0cdb18d5f5fa9a44559e46fd01c9effbb1ab6cf3c5ac5db03199ac60dda03f17\": container with ID starting with 0cdb18d5f5fa9a44559e46fd01c9effbb1ab6cf3c5ac5db03199ac60dda03f17 not found: ID does not exist" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.724184 4886 scope.go:117] "RemoveContainer" containerID="bd7f7f68af6c019f5874ecc65bfcb6fd76594d7f15c29ffa88fbdeda070e9c5b" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.735257 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c36e6697-37b9-4b10-baea-0f9c92014c79-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.738021 4886 scope.go:117] "RemoveContainer" containerID="11d0ed20cabb97cd96a252527a2f57cbc3a01707b987d53593bc18c03df398cf" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.803826 4886 scope.go:117] "RemoveContainer" containerID="587e95e478255c5ab7978918eda8a5869d425a31c3fad8525cf07ea38da482d5" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.823113 4886 scope.go:117] "RemoveContainer" containerID="bd7f7f68af6c019f5874ecc65bfcb6fd76594d7f15c29ffa88fbdeda070e9c5b" Jan 29 16:27:48 crc kubenswrapper[4886]: E0129 16:27:48.825572 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd7f7f68af6c019f5874ecc65bfcb6fd76594d7f15c29ffa88fbdeda070e9c5b\": container with ID starting with bd7f7f68af6c019f5874ecc65bfcb6fd76594d7f15c29ffa88fbdeda070e9c5b not found: ID does not exist" containerID="bd7f7f68af6c019f5874ecc65bfcb6fd76594d7f15c29ffa88fbdeda070e9c5b" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.825763 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd7f7f68af6c019f5874ecc65bfcb6fd76594d7f15c29ffa88fbdeda070e9c5b"} err="failed to get container status \"bd7f7f68af6c019f5874ecc65bfcb6fd76594d7f15c29ffa88fbdeda070e9c5b\": rpc error: code = NotFound desc = could not find container \"bd7f7f68af6c019f5874ecc65bfcb6fd76594d7f15c29ffa88fbdeda070e9c5b\": container with ID starting with bd7f7f68af6c019f5874ecc65bfcb6fd76594d7f15c29ffa88fbdeda070e9c5b not found: ID does not exist" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.825884 4886 scope.go:117] "RemoveContainer" containerID="11d0ed20cabb97cd96a252527a2f57cbc3a01707b987d53593bc18c03df398cf" Jan 29 16:27:48 crc kubenswrapper[4886]: E0129 16:27:48.826327 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11d0ed20cabb97cd96a252527a2f57cbc3a01707b987d53593bc18c03df398cf\": container with ID starting with 11d0ed20cabb97cd96a252527a2f57cbc3a01707b987d53593bc18c03df398cf not found: ID does not exist" containerID="11d0ed20cabb97cd96a252527a2f57cbc3a01707b987d53593bc18c03df398cf" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.826380 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11d0ed20cabb97cd96a252527a2f57cbc3a01707b987d53593bc18c03df398cf"} err="failed to get container status 
\"11d0ed20cabb97cd96a252527a2f57cbc3a01707b987d53593bc18c03df398cf\": rpc error: code = NotFound desc = could not find container \"11d0ed20cabb97cd96a252527a2f57cbc3a01707b987d53593bc18c03df398cf\": container with ID starting with 11d0ed20cabb97cd96a252527a2f57cbc3a01707b987d53593bc18c03df398cf not found: ID does not exist" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.826407 4886 scope.go:117] "RemoveContainer" containerID="587e95e478255c5ab7978918eda8a5869d425a31c3fad8525cf07ea38da482d5" Jan 29 16:27:48 crc kubenswrapper[4886]: E0129 16:27:48.827582 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"587e95e478255c5ab7978918eda8a5869d425a31c3fad8525cf07ea38da482d5\": container with ID starting with 587e95e478255c5ab7978918eda8a5869d425a31c3fad8525cf07ea38da482d5 not found: ID does not exist" containerID="587e95e478255c5ab7978918eda8a5869d425a31c3fad8525cf07ea38da482d5" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.827624 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"587e95e478255c5ab7978918eda8a5869d425a31c3fad8525cf07ea38da482d5"} err="failed to get container status \"587e95e478255c5ab7978918eda8a5869d425a31c3fad8525cf07ea38da482d5\": rpc error: code = NotFound desc = could not find container \"587e95e478255c5ab7978918eda8a5869d425a31c3fad8525cf07ea38da482d5\": container with ID starting with 587e95e478255c5ab7978918eda8a5869d425a31c3fad8525cf07ea38da482d5 not found: ID does not exist" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.827650 4886 scope.go:117] "RemoveContainer" containerID="fd7fef5ae316b90316f06b6e489cce7174661acd1d0b44078f269a28b56f1f22" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.827697 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qtk7r"] Jan 29 16:27:48 crc kubenswrapper[4886]: W0129 16:27:48.833424 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42b8dc70_b29d_4995_9727_9b8e032bdad9.slice/crio-648bc592f49ae3cedaf90d37922cbc1e1495121ad8e957f81f4908846b5e05da WatchSource:0}: Error finding container 648bc592f49ae3cedaf90d37922cbc1e1495121ad8e957f81f4908846b5e05da: Status 404 returned error can't find the container with id 648bc592f49ae3cedaf90d37922cbc1e1495121ad8e957f81f4908846b5e05da Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.849642 4886 scope.go:117] "RemoveContainer" containerID="fd7fef5ae316b90316f06b6e489cce7174661acd1d0b44078f269a28b56f1f22" Jan 29 16:27:48 crc kubenswrapper[4886]: E0129 16:27:48.850050 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd7fef5ae316b90316f06b6e489cce7174661acd1d0b44078f269a28b56f1f22\": container with ID starting with fd7fef5ae316b90316f06b6e489cce7174661acd1d0b44078f269a28b56f1f22 not found: ID does not exist" containerID="fd7fef5ae316b90316f06b6e489cce7174661acd1d0b44078f269a28b56f1f22" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.850114 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd7fef5ae316b90316f06b6e489cce7174661acd1d0b44078f269a28b56f1f22"} err="failed to get container status \"fd7fef5ae316b90316f06b6e489cce7174661acd1d0b44078f269a28b56f1f22\": rpc error: code = NotFound desc = could not find container 
\"fd7fef5ae316b90316f06b6e489cce7174661acd1d0b44078f269a28b56f1f22\": container with ID starting with fd7fef5ae316b90316f06b6e489cce7174661acd1d0b44078f269a28b56f1f22 not found: ID does not exist" Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.868472 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xzc5s"] Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.873870 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xzc5s"] Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.880834 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cj9vs"] Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.886556 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cj9vs"] Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.902496 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6hph6"] Jan 29 16:27:48 crc kubenswrapper[4886]: I0129 16:27:48.906210 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6hph6"] Jan 29 16:27:49 crc kubenswrapper[4886]: I0129 16:27:49.593862 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qtk7r" event={"ID":"42b8dc70-b29d-4995-9727-9b8e032bdad9","Type":"ContainerStarted","Data":"f67a42038126009d6221ae06e997c4b3a4d04b56f64c29fbc910653a5611145e"} Jan 29 16:27:49 crc kubenswrapper[4886]: I0129 16:27:49.594172 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qtk7r" event={"ID":"42b8dc70-b29d-4995-9727-9b8e032bdad9","Type":"ContainerStarted","Data":"648bc592f49ae3cedaf90d37922cbc1e1495121ad8e957f81f4908846b5e05da"} Jan 29 16:27:49 crc kubenswrapper[4886]: I0129 16:27:49.594193 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-qtk7r" Jan 29 16:27:49 crc kubenswrapper[4886]: I0129 16:27:49.599794 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-qtk7r" Jan 29 16:27:49 crc kubenswrapper[4886]: I0129 16:27:49.617147 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-qtk7r" podStartSLOduration=2.617014451 podStartE2EDuration="2.617014451s" podCreationTimestamp="2026-01-29 16:27:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:27:49.613156387 +0000 UTC m=+352.521875669" watchObservedRunningTime="2026-01-29 16:27:49.617014451 +0000 UTC m=+352.525733733" Jan 29 16:27:50 crc kubenswrapper[4886]: I0129 16:27:50.623012 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="434ccaea-8a30-4a97-8908-64bc9f550de0" path="/var/lib/kubelet/pods/434ccaea-8a30-4a97-8908-64bc9f550de0/volumes" Jan 29 16:27:50 crc kubenswrapper[4886]: I0129 16:27:50.623795 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c36e6697-37b9-4b10-baea-0f9c92014c79" path="/var/lib/kubelet/pods/c36e6697-37b9-4b10-baea-0f9c92014c79/volumes" Jan 29 16:27:50 crc kubenswrapper[4886]: I0129 16:27:50.624487 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="d8a07d27-67fb-47e8-9032-e4f831983d75" path="/var/lib/kubelet/pods/d8a07d27-67fb-47e8-9032-e4f831983d75/volumes" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.262970 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jfv6k"] Jan 29 16:28:04 crc kubenswrapper[4886]: E0129 16:28:04.263851 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047adc93-cb46-4ba7-bbdf-4d485a08ea6b" containerName="extract-utilities" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.263870 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="047adc93-cb46-4ba7-bbdf-4d485a08ea6b" containerName="extract-utilities" Jan 29 16:28:04 crc kubenswrapper[4886]: E0129 16:28:04.263884 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8a07d27-67fb-47e8-9032-e4f831983d75" containerName="extract-content" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.263893 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8a07d27-67fb-47e8-9032-e4f831983d75" containerName="extract-content" Jan 29 16:28:04 crc kubenswrapper[4886]: E0129 16:28:04.263902 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8a07d27-67fb-47e8-9032-e4f831983d75" containerName="extract-utilities" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.263910 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8a07d27-67fb-47e8-9032-e4f831983d75" containerName="extract-utilities" Jan 29 16:28:04 crc kubenswrapper[4886]: E0129 16:28:04.263922 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="434ccaea-8a30-4a97-8908-64bc9f550de0" containerName="registry-server" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.263929 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="434ccaea-8a30-4a97-8908-64bc9f550de0" containerName="registry-server" Jan 29 16:28:04 crc kubenswrapper[4886]: E0129 16:28:04.263941 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8a07d27-67fb-47e8-9032-e4f831983d75" containerName="registry-server" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.263950 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8a07d27-67fb-47e8-9032-e4f831983d75" containerName="registry-server" Jan 29 16:28:04 crc kubenswrapper[4886]: E0129 16:28:04.263962 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047adc93-cb46-4ba7-bbdf-4d485a08ea6b" containerName="registry-server" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.263969 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="047adc93-cb46-4ba7-bbdf-4d485a08ea6b" containerName="registry-server" Jan 29 16:28:04 crc kubenswrapper[4886]: E0129 16:28:04.263981 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c36e6697-37b9-4b10-baea-0f9c92014c79" containerName="registry-server" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.263988 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c36e6697-37b9-4b10-baea-0f9c92014c79" containerName="registry-server" Jan 29 16:28:04 crc kubenswrapper[4886]: E0129 16:28:04.264002 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="434ccaea-8a30-4a97-8908-64bc9f550de0" containerName="extract-utilities" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.264010 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="434ccaea-8a30-4a97-8908-64bc9f550de0" containerName="extract-utilities" Jan 29 16:28:04 crc kubenswrapper[4886]: E0129 16:28:04.264022 4886 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047adc93-cb46-4ba7-bbdf-4d485a08ea6b" containerName="extract-content" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.264030 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="047adc93-cb46-4ba7-bbdf-4d485a08ea6b" containerName="extract-content" Jan 29 16:28:04 crc kubenswrapper[4886]: E0129 16:28:04.264039 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c36e6697-37b9-4b10-baea-0f9c92014c79" containerName="extract-content" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.264047 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c36e6697-37b9-4b10-baea-0f9c92014c79" containerName="extract-content" Jan 29 16:28:04 crc kubenswrapper[4886]: E0129 16:28:04.264057 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c36e6697-37b9-4b10-baea-0f9c92014c79" containerName="extract-utilities" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.264065 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c36e6697-37b9-4b10-baea-0f9c92014c79" containerName="extract-utilities" Jan 29 16:28:04 crc kubenswrapper[4886]: E0129 16:28:04.264075 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17accc89-e860-4b12-b5b3-3da7adaa3430" containerName="marketplace-operator" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.264082 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="17accc89-e860-4b12-b5b3-3da7adaa3430" containerName="marketplace-operator" Jan 29 16:28:04 crc kubenswrapper[4886]: E0129 16:28:04.264097 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="434ccaea-8a30-4a97-8908-64bc9f550de0" containerName="extract-content" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.264105 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="434ccaea-8a30-4a97-8908-64bc9f550de0" containerName="extract-content" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.264216 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8a07d27-67fb-47e8-9032-e4f831983d75" containerName="registry-server" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.264229 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="17accc89-e860-4b12-b5b3-3da7adaa3430" containerName="marketplace-operator" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.264241 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="047adc93-cb46-4ba7-bbdf-4d485a08ea6b" containerName="registry-server" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.264266 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="434ccaea-8a30-4a97-8908-64bc9f550de0" containerName="registry-server" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.264276 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="c36e6697-37b9-4b10-baea-0f9c92014c79" containerName="registry-server" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.265245 4886 util.go:30] "No sandbox for pod can be found. 
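
The E-level cpu_manager RemoveStaleState lines above are cleanup rather than failures: admitting the first new pod after the marketplace deletions triggers removal of the CPUSet and memory-pinning state still recorded for the deleted pods' containers. The kubelet persists this state in checkpoint files; to inspect the CPU one on the node (the default path on an unmodified node, stated here as an assumption):

    $ cat /var/lib/kubelet/cpu_manager_state
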
Need to start a new one" pod="openshift-marketplace/certified-operators-jfv6k" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.318937 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.327618 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mlnk\" (UniqueName: \"kubernetes.io/projected/69003a39-1c09-4087-a494-ebfd69e973cf-kube-api-access-5mlnk\") pod \"certified-operators-jfv6k\" (UID: \"69003a39-1c09-4087-a494-ebfd69e973cf\") " pod="openshift-marketplace/certified-operators-jfv6k" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.327779 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69003a39-1c09-4087-a494-ebfd69e973cf-utilities\") pod \"certified-operators-jfv6k\" (UID: \"69003a39-1c09-4087-a494-ebfd69e973cf\") " pod="openshift-marketplace/certified-operators-jfv6k" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.327908 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69003a39-1c09-4087-a494-ebfd69e973cf-catalog-content\") pod \"certified-operators-jfv6k\" (UID: \"69003a39-1c09-4087-a494-ebfd69e973cf\") " pod="openshift-marketplace/certified-operators-jfv6k" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.334027 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jfv6k"] Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.428734 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69003a39-1c09-4087-a494-ebfd69e973cf-catalog-content\") pod \"certified-operators-jfv6k\" (UID: \"69003a39-1c09-4087-a494-ebfd69e973cf\") " pod="openshift-marketplace/certified-operators-jfv6k" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.428776 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mlnk\" (UniqueName: \"kubernetes.io/projected/69003a39-1c09-4087-a494-ebfd69e973cf-kube-api-access-5mlnk\") pod \"certified-operators-jfv6k\" (UID: \"69003a39-1c09-4087-a494-ebfd69e973cf\") " pod="openshift-marketplace/certified-operators-jfv6k" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.428823 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69003a39-1c09-4087-a494-ebfd69e973cf-utilities\") pod \"certified-operators-jfv6k\" (UID: \"69003a39-1c09-4087-a494-ebfd69e973cf\") " pod="openshift-marketplace/certified-operators-jfv6k" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.429299 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69003a39-1c09-4087-a494-ebfd69e973cf-utilities\") pod \"certified-operators-jfv6k\" (UID: \"69003a39-1c09-4087-a494-ebfd69e973cf\") " pod="openshift-marketplace/certified-operators-jfv6k" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.430209 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69003a39-1c09-4087-a494-ebfd69e973cf-catalog-content\") pod \"certified-operators-jfv6k\" (UID: 
\"69003a39-1c09-4087-a494-ebfd69e973cf\") " pod="openshift-marketplace/certified-operators-jfv6k" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.448201 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mlnk\" (UniqueName: \"kubernetes.io/projected/69003a39-1c09-4087-a494-ebfd69e973cf-kube-api-access-5mlnk\") pod \"certified-operators-jfv6k\" (UID: \"69003a39-1c09-4087-a494-ebfd69e973cf\") " pod="openshift-marketplace/certified-operators-jfv6k" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.648068 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jfv6k" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.863845 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q5hs7"] Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.865495 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q5hs7" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.872213 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.874635 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q5hs7"] Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.933894 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7325ad0-28bf-45e0-bbd5-160f441de091-utilities\") pod \"community-operators-q5hs7\" (UID: \"a7325ad0-28bf-45e0-bbd5-160f441de091\") " pod="openshift-marketplace/community-operators-q5hs7" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.933949 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7325ad0-28bf-45e0-bbd5-160f441de091-catalog-content\") pod \"community-operators-q5hs7\" (UID: \"a7325ad0-28bf-45e0-bbd5-160f441de091\") " pod="openshift-marketplace/community-operators-q5hs7" Jan 29 16:28:04 crc kubenswrapper[4886]: I0129 16:28:04.933969 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8jsj\" (UniqueName: \"kubernetes.io/projected/a7325ad0-28bf-45e0-bbd5-160f441de091-kube-api-access-c8jsj\") pod \"community-operators-q5hs7\" (UID: \"a7325ad0-28bf-45e0-bbd5-160f441de091\") " pod="openshift-marketplace/community-operators-q5hs7" Jan 29 16:28:05 crc kubenswrapper[4886]: I0129 16:28:05.035522 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7325ad0-28bf-45e0-bbd5-160f441de091-utilities\") pod \"community-operators-q5hs7\" (UID: \"a7325ad0-28bf-45e0-bbd5-160f441de091\") " pod="openshift-marketplace/community-operators-q5hs7" Jan 29 16:28:05 crc kubenswrapper[4886]: I0129 16:28:05.035670 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7325ad0-28bf-45e0-bbd5-160f441de091-catalog-content\") pod \"community-operators-q5hs7\" (UID: \"a7325ad0-28bf-45e0-bbd5-160f441de091\") " pod="openshift-marketplace/community-operators-q5hs7" Jan 29 16:28:05 crc kubenswrapper[4886]: I0129 16:28:05.035703 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-c8jsj\" (UniqueName: \"kubernetes.io/projected/a7325ad0-28bf-45e0-bbd5-160f441de091-kube-api-access-c8jsj\") pod \"community-operators-q5hs7\" (UID: \"a7325ad0-28bf-45e0-bbd5-160f441de091\") " pod="openshift-marketplace/community-operators-q5hs7" Jan 29 16:28:05 crc kubenswrapper[4886]: I0129 16:28:05.035960 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7325ad0-28bf-45e0-bbd5-160f441de091-utilities\") pod \"community-operators-q5hs7\" (UID: \"a7325ad0-28bf-45e0-bbd5-160f441de091\") " pod="openshift-marketplace/community-operators-q5hs7" Jan 29 16:28:05 crc kubenswrapper[4886]: I0129 16:28:05.036176 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7325ad0-28bf-45e0-bbd5-160f441de091-catalog-content\") pod \"community-operators-q5hs7\" (UID: \"a7325ad0-28bf-45e0-bbd5-160f441de091\") " pod="openshift-marketplace/community-operators-q5hs7" Jan 29 16:28:05 crc kubenswrapper[4886]: I0129 16:28:05.054564 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8jsj\" (UniqueName: \"kubernetes.io/projected/a7325ad0-28bf-45e0-bbd5-160f441de091-kube-api-access-c8jsj\") pod \"community-operators-q5hs7\" (UID: \"a7325ad0-28bf-45e0-bbd5-160f441de091\") " pod="openshift-marketplace/community-operators-q5hs7" Jan 29 16:28:05 crc kubenswrapper[4886]: I0129 16:28:05.063868 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jfv6k"] Jan 29 16:28:05 crc kubenswrapper[4886]: W0129 16:28:05.071063 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69003a39_1c09_4087_a494_ebfd69e973cf.slice/crio-e4d88167fe4815cd042b435714fee0326b8557c7e5fb2b46e9557a042ac995f8 WatchSource:0}: Error finding container e4d88167fe4815cd042b435714fee0326b8557c7e5fb2b46e9557a042ac995f8: Status 404 returned error can't find the container with id e4d88167fe4815cd042b435714fee0326b8557c7e5fb2b46e9557a042ac995f8 Jan 29 16:28:05 crc kubenswrapper[4886]: I0129 16:28:05.191874 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q5hs7" Jan 29 16:28:05 crc kubenswrapper[4886]: I0129 16:28:05.569104 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q5hs7"] Jan 29 16:28:05 crc kubenswrapper[4886]: W0129 16:28:05.578472 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7325ad0_28bf_45e0_bbd5_160f441de091.slice/crio-58e358a0eb4540bb049b243d60b0ba858eec19efdffef34538e1bbcdff0edbc6 WatchSource:0}: Error finding container 58e358a0eb4540bb049b243d60b0ba858eec19efdffef34538e1bbcdff0edbc6: Status 404 returned error can't find the container with id 58e358a0eb4540bb049b243d60b0ba858eec19efdffef34538e1bbcdff0edbc6 Jan 29 16:28:05 crc kubenswrapper[4886]: I0129 16:28:05.684945 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5hs7" event={"ID":"a7325ad0-28bf-45e0-bbd5-160f441de091","Type":"ContainerStarted","Data":"58e358a0eb4540bb049b243d60b0ba858eec19efdffef34538e1bbcdff0edbc6"} Jan 29 16:28:05 crc kubenswrapper[4886]: I0129 16:28:05.686977 4886 generic.go:334] "Generic (PLEG): container finished" podID="69003a39-1c09-4087-a494-ebfd69e973cf" containerID="9dc94c69454cda473e048b5be83a123e92e3d4dcc0206e5c91ebde5e727d2647" exitCode=0 Jan 29 16:28:05 crc kubenswrapper[4886]: I0129 16:28:05.687039 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfv6k" event={"ID":"69003a39-1c09-4087-a494-ebfd69e973cf","Type":"ContainerDied","Data":"9dc94c69454cda473e048b5be83a123e92e3d4dcc0206e5c91ebde5e727d2647"} Jan 29 16:28:05 crc kubenswrapper[4886]: I0129 16:28:05.687078 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfv6k" event={"ID":"69003a39-1c09-4087-a494-ebfd69e973cf","Type":"ContainerStarted","Data":"e4d88167fe4815cd042b435714fee0326b8557c7e5fb2b46e9557a042ac995f8"} Jan 29 16:28:05 crc kubenswrapper[4886]: E0129 16:28:05.814257 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:28:05 crc kubenswrapper[4886]: E0129 16:28:05.814405 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mlnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jfv6k_openshift-marketplace(69003a39-1c09-4087-a494-ebfd69e973cf): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:28:05 crc kubenswrapper[4886]: E0129 16:28:05.815595 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:28:06 crc kubenswrapper[4886]: I0129 16:28:06.666680 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4qbl4"] Jan 29 16:28:06 crc kubenswrapper[4886]: I0129 16:28:06.668606 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qbl4" Jan 29 16:28:06 crc kubenswrapper[4886]: I0129 16:28:06.672101 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qbl4"] Jan 29 16:28:06 crc kubenswrapper[4886]: I0129 16:28:06.672286 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 16:28:06 crc kubenswrapper[4886]: I0129 16:28:06.704749 4886 generic.go:334] "Generic (PLEG): container finished" podID="a7325ad0-28bf-45e0-bbd5-160f441de091" containerID="bd8b45bdbc53c5a19f5d9b16c77f16088c5159f9cfac3b1dd35c0f4cdab8672d" exitCode=0 Jan 29 16:28:06 crc kubenswrapper[4886]: I0129 16:28:06.704847 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5hs7" event={"ID":"a7325ad0-28bf-45e0-bbd5-160f441de091","Type":"ContainerDied","Data":"bd8b45bdbc53c5a19f5d9b16c77f16088c5159f9cfac3b1dd35c0f4cdab8672d"} Jan 29 16:28:06 crc kubenswrapper[4886]: E0129 16:28:06.706651 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:28:06 crc kubenswrapper[4886]: E0129 16:28:06.835958 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:28:06 crc kubenswrapper[4886]: E0129 16:28:06.836161 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8jsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-q5hs7_openshift-marketplace(a7325ad0-28bf-45e0-bbd5-160f441de091): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:28:06 crc kubenswrapper[4886]: E0129 16:28:06.837768 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-q5hs7" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091" Jan 29 16:28:06 crc kubenswrapper[4886]: I0129 16:28:06.856721 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf7sq\" (UniqueName: \"kubernetes.io/projected/57aa9115-b2d5-45aa-8ac3-e251c0907e45-kube-api-access-vf7sq\") pod \"redhat-marketplace-4qbl4\" (UID: \"57aa9115-b2d5-45aa-8ac3-e251c0907e45\") " pod="openshift-marketplace/redhat-marketplace-4qbl4" Jan 29 16:28:06 crc kubenswrapper[4886]: I0129 16:28:06.856767 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57aa9115-b2d5-45aa-8ac3-e251c0907e45-catalog-content\") pod \"redhat-marketplace-4qbl4\" (UID: \"57aa9115-b2d5-45aa-8ac3-e251c0907e45\") " pod="openshift-marketplace/redhat-marketplace-4qbl4" Jan 29 16:28:06 crc kubenswrapper[4886]: I0129 16:28:06.856812 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57aa9115-b2d5-45aa-8ac3-e251c0907e45-utilities\") pod \"redhat-marketplace-4qbl4\" (UID: \"57aa9115-b2d5-45aa-8ac3-e251c0907e45\") " pod="openshift-marketplace/redhat-marketplace-4qbl4" Jan 29 16:28:06 crc kubenswrapper[4886]: I0129 16:28:06.958153 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57aa9115-b2d5-45aa-8ac3-e251c0907e45-catalog-content\") pod \"redhat-marketplace-4qbl4\" (UID: \"57aa9115-b2d5-45aa-8ac3-e251c0907e45\") " pod="openshift-marketplace/redhat-marketplace-4qbl4" Jan 29 16:28:06 crc kubenswrapper[4886]: I0129 16:28:06.958248 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57aa9115-b2d5-45aa-8ac3-e251c0907e45-utilities\") pod \"redhat-marketplace-4qbl4\" (UID: \"57aa9115-b2d5-45aa-8ac3-e251c0907e45\") " pod="openshift-marketplace/redhat-marketplace-4qbl4" Jan 29 16:28:06 crc kubenswrapper[4886]: I0129 16:28:06.958309 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf7sq\" (UniqueName: \"kubernetes.io/projected/57aa9115-b2d5-45aa-8ac3-e251c0907e45-kube-api-access-vf7sq\") pod \"redhat-marketplace-4qbl4\" (UID: \"57aa9115-b2d5-45aa-8ac3-e251c0907e45\") " pod="openshift-marketplace/redhat-marketplace-4qbl4" Jan 29 16:28:06 crc kubenswrapper[4886]: I0129 16:28:06.959254 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57aa9115-b2d5-45aa-8ac3-e251c0907e45-catalog-content\") pod \"redhat-marketplace-4qbl4\" (UID: \"57aa9115-b2d5-45aa-8ac3-e251c0907e45\") " 
pod="openshift-marketplace/redhat-marketplace-4qbl4" Jan 29 16:28:06 crc kubenswrapper[4886]: I0129 16:28:06.959538 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57aa9115-b2d5-45aa-8ac3-e251c0907e45-utilities\") pod \"redhat-marketplace-4qbl4\" (UID: \"57aa9115-b2d5-45aa-8ac3-e251c0907e45\") " pod="openshift-marketplace/redhat-marketplace-4qbl4" Jan 29 16:28:06 crc kubenswrapper[4886]: I0129 16:28:06.981650 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf7sq\" (UniqueName: \"kubernetes.io/projected/57aa9115-b2d5-45aa-8ac3-e251c0907e45-kube-api-access-vf7sq\") pod \"redhat-marketplace-4qbl4\" (UID: \"57aa9115-b2d5-45aa-8ac3-e251c0907e45\") " pod="openshift-marketplace/redhat-marketplace-4qbl4" Jan 29 16:28:07 crc kubenswrapper[4886]: I0129 16:28:07.016508 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qbl4" Jan 29 16:28:07 crc kubenswrapper[4886]: I0129 16:28:07.263089 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zkk68"] Jan 29 16:28:07 crc kubenswrapper[4886]: I0129 16:28:07.264434 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zkk68" Jan 29 16:28:07 crc kubenswrapper[4886]: I0129 16:28:07.266360 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 16:28:07 crc kubenswrapper[4886]: I0129 16:28:07.270861 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zkk68"] Jan 29 16:28:07 crc kubenswrapper[4886]: I0129 16:28:07.363530 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn92n\" (UniqueName: \"kubernetes.io/projected/d84ce3e9-c41a-4a08-8d86-2a918d5e9450-kube-api-access-vn92n\") pod \"redhat-operators-zkk68\" (UID: \"d84ce3e9-c41a-4a08-8d86-2a918d5e9450\") " pod="openshift-marketplace/redhat-operators-zkk68" Jan 29 16:28:07 crc kubenswrapper[4886]: I0129 16:28:07.363616 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d84ce3e9-c41a-4a08-8d86-2a918d5e9450-catalog-content\") pod \"redhat-operators-zkk68\" (UID: \"d84ce3e9-c41a-4a08-8d86-2a918d5e9450\") " pod="openshift-marketplace/redhat-operators-zkk68" Jan 29 16:28:07 crc kubenswrapper[4886]: I0129 16:28:07.363672 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d84ce3e9-c41a-4a08-8d86-2a918d5e9450-utilities\") pod \"redhat-operators-zkk68\" (UID: \"d84ce3e9-c41a-4a08-8d86-2a918d5e9450\") " pod="openshift-marketplace/redhat-operators-zkk68" Jan 29 16:28:07 crc kubenswrapper[4886]: I0129 16:28:07.440309 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qbl4"] Jan 29 16:28:07 crc kubenswrapper[4886]: W0129 16:28:07.455044 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57aa9115_b2d5_45aa_8ac3_e251c0907e45.slice/crio-68d81ee76eccd615ba9046c4c1e6648df9ef22ce6eee6d566d9309dd619e6010 WatchSource:0}: Error finding container 68d81ee76eccd615ba9046c4c1e6648df9ef22ce6eee6d566d9309dd619e6010: 
Status 404 returned error can't find the container with id 68d81ee76eccd615ba9046c4c1e6648df9ef22ce6eee6d566d9309dd619e6010 Jan 29 16:28:07 crc kubenswrapper[4886]: I0129 16:28:07.465130 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn92n\" (UniqueName: \"kubernetes.io/projected/d84ce3e9-c41a-4a08-8d86-2a918d5e9450-kube-api-access-vn92n\") pod \"redhat-operators-zkk68\" (UID: \"d84ce3e9-c41a-4a08-8d86-2a918d5e9450\") " pod="openshift-marketplace/redhat-operators-zkk68" Jan 29 16:28:07 crc kubenswrapper[4886]: I0129 16:28:07.465194 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d84ce3e9-c41a-4a08-8d86-2a918d5e9450-catalog-content\") pod \"redhat-operators-zkk68\" (UID: \"d84ce3e9-c41a-4a08-8d86-2a918d5e9450\") " pod="openshift-marketplace/redhat-operators-zkk68" Jan 29 16:28:07 crc kubenswrapper[4886]: I0129 16:28:07.465225 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d84ce3e9-c41a-4a08-8d86-2a918d5e9450-utilities\") pod \"redhat-operators-zkk68\" (UID: \"d84ce3e9-c41a-4a08-8d86-2a918d5e9450\") " pod="openshift-marketplace/redhat-operators-zkk68" Jan 29 16:28:07 crc kubenswrapper[4886]: I0129 16:28:07.465693 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d84ce3e9-c41a-4a08-8d86-2a918d5e9450-catalog-content\") pod \"redhat-operators-zkk68\" (UID: \"d84ce3e9-c41a-4a08-8d86-2a918d5e9450\") " pod="openshift-marketplace/redhat-operators-zkk68" Jan 29 16:28:07 crc kubenswrapper[4886]: I0129 16:28:07.465748 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d84ce3e9-c41a-4a08-8d86-2a918d5e9450-utilities\") pod \"redhat-operators-zkk68\" (UID: \"d84ce3e9-c41a-4a08-8d86-2a918d5e9450\") " pod="openshift-marketplace/redhat-operators-zkk68" Jan 29 16:28:07 crc kubenswrapper[4886]: I0129 16:28:07.493415 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn92n\" (UniqueName: \"kubernetes.io/projected/d84ce3e9-c41a-4a08-8d86-2a918d5e9450-kube-api-access-vn92n\") pod \"redhat-operators-zkk68\" (UID: \"d84ce3e9-c41a-4a08-8d86-2a918d5e9450\") " pod="openshift-marketplace/redhat-operators-zkk68" Jan 29 16:28:07 crc kubenswrapper[4886]: I0129 16:28:07.582203 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zkk68" Jan 29 16:28:07 crc kubenswrapper[4886]: I0129 16:28:07.713596 4886 generic.go:334] "Generic (PLEG): container finished" podID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" containerID="9483d17c90afb2d261251cb57ed87c956106b0b7bb964afcffdf0a2d1b5b13c1" exitCode=0 Jan 29 16:28:07 crc kubenswrapper[4886]: I0129 16:28:07.713643 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qbl4" event={"ID":"57aa9115-b2d5-45aa-8ac3-e251c0907e45","Type":"ContainerDied","Data":"9483d17c90afb2d261251cb57ed87c956106b0b7bb964afcffdf0a2d1b5b13c1"} Jan 29 16:28:07 crc kubenswrapper[4886]: I0129 16:28:07.713688 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qbl4" event={"ID":"57aa9115-b2d5-45aa-8ac3-e251c0907e45","Type":"ContainerStarted","Data":"68d81ee76eccd615ba9046c4c1e6648df9ef22ce6eee6d566d9309dd619e6010"} Jan 29 16:28:07 crc kubenswrapper[4886]: E0129 16:28:07.715284 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q5hs7" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091" Jan 29 16:28:07 crc kubenswrapper[4886]: E0129 16:28:07.843424 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:28:07 crc kubenswrapper[4886]: E0129 16:28:07.843630 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vf7sq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4qbl4_openshift-marketplace(57aa9115-b2d5-45aa-8ac3-e251c0907e45): ErrImagePull: initializing 
source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:28:07 crc kubenswrapper[4886]: E0129 16:28:07.844833 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-4qbl4" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" Jan 29 16:28:07 crc kubenswrapper[4886]: I0129 16:28:07.970624 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zkk68"] Jan 29 16:28:07 crc kubenswrapper[4886]: W0129 16:28:07.988491 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd84ce3e9_c41a_4a08_8d86_2a918d5e9450.slice/crio-1de9e48715ad861e4d8bd78cecc12c2dcf52cdf92d4274338ddeebf931d7420d WatchSource:0}: Error finding container 1de9e48715ad861e4d8bd78cecc12c2dcf52cdf92d4274338ddeebf931d7420d: Status 404 returned error can't find the container with id 1de9e48715ad861e4d8bd78cecc12c2dcf52cdf92d4274338ddeebf931d7420d Jan 29 16:28:08 crc kubenswrapper[4886]: I0129 16:28:08.719473 4886 generic.go:334] "Generic (PLEG): container finished" podID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" containerID="9771013e1661afa4b7f2a5038c24d8397533ccd7c529146bb8fb2adf4c78bad6" exitCode=0 Jan 29 16:28:08 crc kubenswrapper[4886]: I0129 16:28:08.719536 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkk68" event={"ID":"d84ce3e9-c41a-4a08-8d86-2a918d5e9450","Type":"ContainerDied","Data":"9771013e1661afa4b7f2a5038c24d8397533ccd7c529146bb8fb2adf4c78bad6"} Jan 29 16:28:08 crc kubenswrapper[4886]: I0129 16:28:08.719586 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkk68" event={"ID":"d84ce3e9-c41a-4a08-8d86-2a918d5e9450","Type":"ContainerStarted","Data":"1de9e48715ad861e4d8bd78cecc12c2dcf52cdf92d4274338ddeebf931d7420d"} Jan 29 16:28:08 crc kubenswrapper[4886]: E0129 16:28:08.721137 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4qbl4" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" Jan 29 16:28:08 crc kubenswrapper[4886]: E0129 16:28:08.849458 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:28:08 crc kubenswrapper[4886]: E0129 16:28:08.853851 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vn92n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-zkk68_openshift-marketplace(d84ce3e9-c41a-4a08-8d86-2a918d5e9450): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:28:08 crc kubenswrapper[4886]: E0129 16:28:08.855644 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:28:09 crc kubenswrapper[4886]: E0129 16:28:09.726299 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:28:11 crc kubenswrapper[4886]: I0129 16:28:11.648433 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-559577448b-qljqw"] Jan 29 16:28:11 crc kubenswrapper[4886]: I0129 16:28:11.649022 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-559577448b-qljqw" podUID="e7b68f8a-9483-479e-bf2d-441dff994e02" containerName="controller-manager" containerID="cri-o://1baf76b04c25852c14f6eddaeefa7479b2d32f63cecc26a393263dba5b8aedfb" gracePeriod=30 Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.142802 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-559577448b-qljqw" Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.223957 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e7b68f8a-9483-479e-bf2d-441dff994e02-client-ca\") pod \"e7b68f8a-9483-479e-bf2d-441dff994e02\" (UID: \"e7b68f8a-9483-479e-bf2d-441dff994e02\") " Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.224002 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbqww\" (UniqueName: \"kubernetes.io/projected/e7b68f8a-9483-479e-bf2d-441dff994e02-kube-api-access-sbqww\") pod \"e7b68f8a-9483-479e-bf2d-441dff994e02\" (UID: \"e7b68f8a-9483-479e-bf2d-441dff994e02\") " Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.224026 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7b68f8a-9483-479e-bf2d-441dff994e02-config\") pod \"e7b68f8a-9483-479e-bf2d-441dff994e02\" (UID: \"e7b68f8a-9483-479e-bf2d-441dff994e02\") " Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.225362 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7b68f8a-9483-479e-bf2d-441dff994e02-client-ca" (OuterVolumeSpecName: "client-ca") pod "e7b68f8a-9483-479e-bf2d-441dff994e02" (UID: "e7b68f8a-9483-479e-bf2d-441dff994e02"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.225426 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7b68f8a-9483-479e-bf2d-441dff994e02-config" (OuterVolumeSpecName: "config") pod "e7b68f8a-9483-479e-bf2d-441dff994e02" (UID: "e7b68f8a-9483-479e-bf2d-441dff994e02"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.230238 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7b68f8a-9483-479e-bf2d-441dff994e02-kube-api-access-sbqww" (OuterVolumeSpecName: "kube-api-access-sbqww") pod "e7b68f8a-9483-479e-bf2d-441dff994e02" (UID: "e7b68f8a-9483-479e-bf2d-441dff994e02"). InnerVolumeSpecName "kube-api-access-sbqww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.324759 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7b68f8a-9483-479e-bf2d-441dff994e02-proxy-ca-bundles\") pod \"e7b68f8a-9483-479e-bf2d-441dff994e02\" (UID: \"e7b68f8a-9483-479e-bf2d-441dff994e02\") " Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.324934 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7b68f8a-9483-479e-bf2d-441dff994e02-serving-cert\") pod \"e7b68f8a-9483-479e-bf2d-441dff994e02\" (UID: \"e7b68f8a-9483-479e-bf2d-441dff994e02\") " Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.325424 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbqww\" (UniqueName: \"kubernetes.io/projected/e7b68f8a-9483-479e-bf2d-441dff994e02-kube-api-access-sbqww\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.325422 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7b68f8a-9483-479e-bf2d-441dff994e02-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e7b68f8a-9483-479e-bf2d-441dff994e02" (UID: "e7b68f8a-9483-479e-bf2d-441dff994e02"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.325483 4886 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e7b68f8a-9483-479e-bf2d-441dff994e02-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.325508 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7b68f8a-9483-479e-bf2d-441dff994e02-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.329220 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7b68f8a-9483-479e-bf2d-441dff994e02-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7b68f8a-9483-479e-bf2d-441dff994e02" (UID: "e7b68f8a-9483-479e-bf2d-441dff994e02"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.425965 4886 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7b68f8a-9483-479e-bf2d-441dff994e02-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.426015 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7b68f8a-9483-479e-bf2d-441dff994e02-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.738898 4886 generic.go:334] "Generic (PLEG): container finished" podID="e7b68f8a-9483-479e-bf2d-441dff994e02" containerID="1baf76b04c25852c14f6eddaeefa7479b2d32f63cecc26a393263dba5b8aedfb" exitCode=0 Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.738941 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-559577448b-qljqw" event={"ID":"e7b68f8a-9483-479e-bf2d-441dff994e02","Type":"ContainerDied","Data":"1baf76b04c25852c14f6eddaeefa7479b2d32f63cecc26a393263dba5b8aedfb"} Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.738972 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-559577448b-qljqw" event={"ID":"e7b68f8a-9483-479e-bf2d-441dff994e02","Type":"ContainerDied","Data":"61013901f79515c510fd797b6e9c94166fd6b2d802a9282570c4f90aaedd5f07"} Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.738991 4886 scope.go:117] "RemoveContainer" containerID="1baf76b04c25852c14f6eddaeefa7479b2d32f63cecc26a393263dba5b8aedfb" Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.739104 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-559577448b-qljqw" Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.755910 4886 scope.go:117] "RemoveContainer" containerID="1baf76b04c25852c14f6eddaeefa7479b2d32f63cecc26a393263dba5b8aedfb" Jan 29 16:28:12 crc kubenswrapper[4886]: E0129 16:28:12.756519 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1baf76b04c25852c14f6eddaeefa7479b2d32f63cecc26a393263dba5b8aedfb\": container with ID starting with 1baf76b04c25852c14f6eddaeefa7479b2d32f63cecc26a393263dba5b8aedfb not found: ID does not exist" containerID="1baf76b04c25852c14f6eddaeefa7479b2d32f63cecc26a393263dba5b8aedfb" Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.756547 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1baf76b04c25852c14f6eddaeefa7479b2d32f63cecc26a393263dba5b8aedfb"} err="failed to get container status \"1baf76b04c25852c14f6eddaeefa7479b2d32f63cecc26a393263dba5b8aedfb\": rpc error: code = NotFound desc = could not find container \"1baf76b04c25852c14f6eddaeefa7479b2d32f63cecc26a393263dba5b8aedfb\": container with ID starting with 1baf76b04c25852c14f6eddaeefa7479b2d32f63cecc26a393263dba5b8aedfb not found: ID does not exist" Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.762938 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-559577448b-qljqw"] Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.771349 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-559577448b-qljqw"] Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.882035 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c58fc677-rq8vv"] Jan 29 16:28:12 crc kubenswrapper[4886]: E0129 16:28:12.882505 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7b68f8a-9483-479e-bf2d-441dff994e02" containerName="controller-manager" Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.882518 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7b68f8a-9483-479e-bf2d-441dff994e02" containerName="controller-manager" Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.882611 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7b68f8a-9483-479e-bf2d-441dff994e02" containerName="controller-manager" Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.882942 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c58fc677-rq8vv" Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.884462 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.885213 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.885236 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.885290 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.886722 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.886811 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.892878 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c58fc677-rq8vv"] Jan 29 16:28:12 crc kubenswrapper[4886]: I0129 16:28:12.898120 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 16:28:13 crc kubenswrapper[4886]: I0129 16:28:13.033439 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2j4h\" (UniqueName: \"kubernetes.io/projected/f13f8975-f61d-4cf6-8a08-76e4427efada-kube-api-access-j2j4h\") pod \"controller-manager-c58fc677-rq8vv\" (UID: \"f13f8975-f61d-4cf6-8a08-76e4427efada\") " pod="openshift-controller-manager/controller-manager-c58fc677-rq8vv" Jan 29 16:28:13 crc kubenswrapper[4886]: I0129 16:28:13.033578 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f13f8975-f61d-4cf6-8a08-76e4427efada-client-ca\") pod \"controller-manager-c58fc677-rq8vv\" (UID: \"f13f8975-f61d-4cf6-8a08-76e4427efada\") " pod="openshift-controller-manager/controller-manager-c58fc677-rq8vv" Jan 29 16:28:13 crc kubenswrapper[4886]: I0129 16:28:13.033628 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f13f8975-f61d-4cf6-8a08-76e4427efada-config\") pod \"controller-manager-c58fc677-rq8vv\" (UID: \"f13f8975-f61d-4cf6-8a08-76e4427efada\") " pod="openshift-controller-manager/controller-manager-c58fc677-rq8vv" Jan 29 16:28:13 crc kubenswrapper[4886]: I0129 16:28:13.033663 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f13f8975-f61d-4cf6-8a08-76e4427efada-proxy-ca-bundles\") pod \"controller-manager-c58fc677-rq8vv\" (UID: \"f13f8975-f61d-4cf6-8a08-76e4427efada\") " pod="openshift-controller-manager/controller-manager-c58fc677-rq8vv" Jan 29 16:28:13 crc kubenswrapper[4886]: I0129 16:28:13.033842 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f13f8975-f61d-4cf6-8a08-76e4427efada-serving-cert\") pod \"controller-manager-c58fc677-rq8vv\" (UID: \"f13f8975-f61d-4cf6-8a08-76e4427efada\") " pod="openshift-controller-manager/controller-manager-c58fc677-rq8vv" Jan 29 16:28:13 crc kubenswrapper[4886]: I0129 16:28:13.135193 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f13f8975-f61d-4cf6-8a08-76e4427efada-serving-cert\") pod \"controller-manager-c58fc677-rq8vv\" (UID: \"f13f8975-f61d-4cf6-8a08-76e4427efada\") " pod="openshift-controller-manager/controller-manager-c58fc677-rq8vv" Jan 29 16:28:13 crc kubenswrapper[4886]: I0129 16:28:13.135263 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2j4h\" (UniqueName: \"kubernetes.io/projected/f13f8975-f61d-4cf6-8a08-76e4427efada-kube-api-access-j2j4h\") pod \"controller-manager-c58fc677-rq8vv\" (UID: \"f13f8975-f61d-4cf6-8a08-76e4427efada\") " pod="openshift-controller-manager/controller-manager-c58fc677-rq8vv" Jan 29 16:28:13 crc kubenswrapper[4886]: I0129 16:28:13.135292 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f13f8975-f61d-4cf6-8a08-76e4427efada-client-ca\") pod \"controller-manager-c58fc677-rq8vv\" (UID: \"f13f8975-f61d-4cf6-8a08-76e4427efada\") " pod="openshift-controller-manager/controller-manager-c58fc677-rq8vv" Jan 29 16:28:13 crc kubenswrapper[4886]: I0129 16:28:13.135339 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f13f8975-f61d-4cf6-8a08-76e4427efada-config\") pod \"controller-manager-c58fc677-rq8vv\" (UID: \"f13f8975-f61d-4cf6-8a08-76e4427efada\") " pod="openshift-controller-manager/controller-manager-c58fc677-rq8vv" Jan 29 16:28:13 crc kubenswrapper[4886]: I0129 16:28:13.135361 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f13f8975-f61d-4cf6-8a08-76e4427efada-proxy-ca-bundles\") pod \"controller-manager-c58fc677-rq8vv\" (UID: \"f13f8975-f61d-4cf6-8a08-76e4427efada\") " pod="openshift-controller-manager/controller-manager-c58fc677-rq8vv" Jan 29 16:28:13 crc kubenswrapper[4886]: I0129 16:28:13.136492 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f13f8975-f61d-4cf6-8a08-76e4427efada-proxy-ca-bundles\") pod \"controller-manager-c58fc677-rq8vv\" (UID: \"f13f8975-f61d-4cf6-8a08-76e4427efada\") " pod="openshift-controller-manager/controller-manager-c58fc677-rq8vv" Jan 29 16:28:13 crc kubenswrapper[4886]: I0129 16:28:13.136882 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f13f8975-f61d-4cf6-8a08-76e4427efada-client-ca\") pod \"controller-manager-c58fc677-rq8vv\" (UID: \"f13f8975-f61d-4cf6-8a08-76e4427efada\") " pod="openshift-controller-manager/controller-manager-c58fc677-rq8vv" Jan 29 16:28:13 crc kubenswrapper[4886]: I0129 16:28:13.136989 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f13f8975-f61d-4cf6-8a08-76e4427efada-config\") pod \"controller-manager-c58fc677-rq8vv\" (UID: \"f13f8975-f61d-4cf6-8a08-76e4427efada\") " pod="openshift-controller-manager/controller-manager-c58fc677-rq8vv" Jan 29 16:28:13 crc 
kubenswrapper[4886]: I0129 16:28:13.138478 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f13f8975-f61d-4cf6-8a08-76e4427efada-serving-cert\") pod \"controller-manager-c58fc677-rq8vv\" (UID: \"f13f8975-f61d-4cf6-8a08-76e4427efada\") " pod="openshift-controller-manager/controller-manager-c58fc677-rq8vv" Jan 29 16:28:13 crc kubenswrapper[4886]: I0129 16:28:13.151362 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2j4h\" (UniqueName: \"kubernetes.io/projected/f13f8975-f61d-4cf6-8a08-76e4427efada-kube-api-access-j2j4h\") pod \"controller-manager-c58fc677-rq8vv\" (UID: \"f13f8975-f61d-4cf6-8a08-76e4427efada\") " pod="openshift-controller-manager/controller-manager-c58fc677-rq8vv" Jan 29 16:28:13 crc kubenswrapper[4886]: I0129 16:28:13.200964 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c58fc677-rq8vv" Jan 29 16:28:13 crc kubenswrapper[4886]: I0129 16:28:13.371880 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c58fc677-rq8vv"] Jan 29 16:28:13 crc kubenswrapper[4886]: I0129 16:28:13.746013 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c58fc677-rq8vv" event={"ID":"f13f8975-f61d-4cf6-8a08-76e4427efada","Type":"ContainerStarted","Data":"a07266500e9f0b537705d2ac1e2e398e522bbc0519fdd50045f683924f5f7c8a"} Jan 29 16:28:13 crc kubenswrapper[4886]: I0129 16:28:13.746299 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c58fc677-rq8vv" event={"ID":"f13f8975-f61d-4cf6-8a08-76e4427efada","Type":"ContainerStarted","Data":"46802b4bcfebe0b5f4e58a06ce68253a5640e5675bb511d193c95ea139fd61d0"} Jan 29 16:28:13 crc kubenswrapper[4886]: I0129 16:28:13.746733 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-c58fc677-rq8vv" Jan 29 16:28:13 crc kubenswrapper[4886]: I0129 16:28:13.757730 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-c58fc677-rq8vv" Jan 29 16:28:13 crc kubenswrapper[4886]: I0129 16:28:13.798940 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-c58fc677-rq8vv" podStartSLOduration=2.798921026 podStartE2EDuration="2.798921026s" podCreationTimestamp="2026-01-29 16:28:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:28:13.783683193 +0000 UTC m=+376.692402465" watchObservedRunningTime="2026-01-29 16:28:13.798921026 +0000 UTC m=+376.707640298" Jan 29 16:28:14 crc kubenswrapper[4886]: I0129 16:28:14.621138 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7b68f8a-9483-479e-bf2d-441dff994e02" path="/var/lib/kubelet/pods/e7b68f8a-9483-479e-bf2d-441dff994e02/volumes" Jan 29 16:28:18 crc kubenswrapper[4886]: I0129 16:28:18.511929 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg"] Jan 29 16:28:19 crc kubenswrapper[4886]: I0129 16:28:19.033195 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-9stfm"] Jan 29 16:28:19 crc kubenswrapper[4886]: I0129 
16:28:19.034792 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9stfm" Jan 29 16:28:19 crc kubenswrapper[4886]: I0129 16:28:19.041835 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-9stfm"] Jan 29 16:28:19 crc kubenswrapper[4886]: I0129 16:28:19.043883 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Jan 29 16:28:19 crc kubenswrapper[4886]: I0129 16:28:19.044842 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Jan 29 16:28:19 crc kubenswrapper[4886]: I0129 16:28:19.045873 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Jan 29 16:28:19 crc kubenswrapper[4886]: I0129 16:28:19.045988 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Jan 29 16:28:19 crc kubenswrapper[4886]: I0129 16:28:19.045988 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Jan 29 16:28:19 crc kubenswrapper[4886]: I0129 16:28:19.112109 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3dd4249-1e33-4000-8cf8-94db106891dc-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-9stfm\" (UID: \"a3dd4249-1e33-4000-8cf8-94db106891dc\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9stfm" Jan 29 16:28:19 crc kubenswrapper[4886]: I0129 16:28:19.112164 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/a3dd4249-1e33-4000-8cf8-94db106891dc-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-9stfm\" (UID: \"a3dd4249-1e33-4000-8cf8-94db106891dc\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9stfm" Jan 29 16:28:19 crc kubenswrapper[4886]: I0129 16:28:19.112195 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrd6t\" (UniqueName: \"kubernetes.io/projected/a3dd4249-1e33-4000-8cf8-94db106891dc-kube-api-access-zrd6t\") pod \"cluster-monitoring-operator-6d5b84845-9stfm\" (UID: \"a3dd4249-1e33-4000-8cf8-94db106891dc\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9stfm" Jan 29 16:28:19 crc kubenswrapper[4886]: I0129 16:28:19.213033 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3dd4249-1e33-4000-8cf8-94db106891dc-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-9stfm\" (UID: \"a3dd4249-1e33-4000-8cf8-94db106891dc\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9stfm" Jan 29 16:28:19 crc kubenswrapper[4886]: I0129 16:28:19.213080 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/a3dd4249-1e33-4000-8cf8-94db106891dc-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-9stfm\" (UID: \"a3dd4249-1e33-4000-8cf8-94db106891dc\") " 
pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9stfm" Jan 29 16:28:19 crc kubenswrapper[4886]: I0129 16:28:19.213107 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrd6t\" (UniqueName: \"kubernetes.io/projected/a3dd4249-1e33-4000-8cf8-94db106891dc-kube-api-access-zrd6t\") pod \"cluster-monitoring-operator-6d5b84845-9stfm\" (UID: \"a3dd4249-1e33-4000-8cf8-94db106891dc\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9stfm" Jan 29 16:28:19 crc kubenswrapper[4886]: I0129 16:28:19.214114 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/a3dd4249-1e33-4000-8cf8-94db106891dc-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-9stfm\" (UID: \"a3dd4249-1e33-4000-8cf8-94db106891dc\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9stfm" Jan 29 16:28:19 crc kubenswrapper[4886]: I0129 16:28:19.223046 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3dd4249-1e33-4000-8cf8-94db106891dc-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-9stfm\" (UID: \"a3dd4249-1e33-4000-8cf8-94db106891dc\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9stfm" Jan 29 16:28:19 crc kubenswrapper[4886]: I0129 16:28:19.241009 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrd6t\" (UniqueName: \"kubernetes.io/projected/a3dd4249-1e33-4000-8cf8-94db106891dc-kube-api-access-zrd6t\") pod \"cluster-monitoring-operator-6d5b84845-9stfm\" (UID: \"a3dd4249-1e33-4000-8cf8-94db106891dc\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9stfm" Jan 29 16:28:19 crc kubenswrapper[4886]: I0129 16:28:19.394425 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9stfm" Jan 29 16:28:19 crc kubenswrapper[4886]: E0129 16:28:19.743562 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:28:19 crc kubenswrapper[4886]: E0129 16:28:19.743996 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mlnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jfv6k_openshift-marketplace(69003a39-1c09-4087-a494-ebfd69e973cf): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:28:19 crc kubenswrapper[4886]: E0129 16:28:19.745108 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:28:19 crc kubenswrapper[4886]: I0129 16:28:19.795137 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-9stfm"] Jan 29 16:28:19 crc kubenswrapper[4886]: W0129 16:28:19.804748 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3dd4249_1e33_4000_8cf8_94db106891dc.slice/crio-48b8b1abf01ec742aef11fe0a5a8be90c6afc747ffbf4755afea7e1865e560d3 WatchSource:0}: Error finding container 48b8b1abf01ec742aef11fe0a5a8be90c6afc747ffbf4755afea7e1865e560d3: Status 404 returned error can't 
find the container with id 48b8b1abf01ec742aef11fe0a5a8be90c6afc747ffbf4755afea7e1865e560d3 Jan 29 16:28:20 crc kubenswrapper[4886]: I0129 16:28:20.783195 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9stfm" event={"ID":"a3dd4249-1e33-4000-8cf8-94db106891dc","Type":"ContainerStarted","Data":"48b8b1abf01ec742aef11fe0a5a8be90c6afc747ffbf4755afea7e1865e560d3"} Jan 29 16:28:21 crc kubenswrapper[4886]: E0129 16:28:21.752395 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:28:21 crc kubenswrapper[4886]: E0129 16:28:21.752919 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8jsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-q5hs7_openshift-marketplace(a7325ad0-28bf-45e0-bbd5-160f441de091): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:28:21 crc kubenswrapper[4886]: E0129 16:28:21.754377 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-q5hs7" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091" Jan 29 16:28:21 crc kubenswrapper[4886]: E0129 16:28:21.765120 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 
403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:28:21 crc kubenswrapper[4886]: E0129 16:28:21.765316 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vn92n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-zkk68_openshift-marketplace(d84ce3e9-c41a-4a08-8d86-2a918d5e9450): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:28:21 crc kubenswrapper[4886]: E0129 16:28:21.766888 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:28:21 crc kubenswrapper[4886]: I0129 16:28:21.789155 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9stfm" event={"ID":"a3dd4249-1e33-4000-8cf8-94db106891dc","Type":"ContainerStarted","Data":"40462d71fb2894f86ea4404c51bffe0c125791dad00e65d87f460a224575d876"} Jan 29 16:28:21 crc kubenswrapper[4886]: I0129 16:28:21.810630 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9stfm" podStartSLOduration=1.038990985 podStartE2EDuration="2.810608639s" podCreationTimestamp="2026-01-29 16:28:19 +0000 UTC" firstStartedPulling="2026-01-29 16:28:19.80785382 +0000 UTC m=+382.716573092" lastFinishedPulling="2026-01-29 16:28:21.579471474 +0000 UTC m=+384.488190746" observedRunningTime="2026-01-29 16:28:21.804020943 +0000 UTC m=+384.712740225" watchObservedRunningTime="2026-01-29 16:28:21.810608639 +0000 UTC m=+384.719327911" Jan 29 16:28:22 crc kubenswrapper[4886]: I0129 16:28:22.170271 
4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dfl4d"] Jan 29 16:28:22 crc kubenswrapper[4886]: I0129 16:28:22.171080 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dfl4d" Jan 29 16:28:22 crc kubenswrapper[4886]: I0129 16:28:22.172913 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Jan 29 16:28:22 crc kubenswrapper[4886]: I0129 16:28:22.172985 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-7bjfn" Jan 29 16:28:22 crc kubenswrapper[4886]: I0129 16:28:22.179056 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dfl4d"] Jan 29 16:28:22 crc kubenswrapper[4886]: I0129 16:28:22.251263 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/c050db1b-3854-406d-8cc5-fc997e9a1abe-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-dfl4d\" (UID: \"c050db1b-3854-406d-8cc5-fc997e9a1abe\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dfl4d" Jan 29 16:28:22 crc kubenswrapper[4886]: I0129 16:28:22.352758 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/c050db1b-3854-406d-8cc5-fc997e9a1abe-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-dfl4d\" (UID: \"c050db1b-3854-406d-8cc5-fc997e9a1abe\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dfl4d" Jan 29 16:28:22 crc kubenswrapper[4886]: I0129 16:28:22.361479 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/c050db1b-3854-406d-8cc5-fc997e9a1abe-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-dfl4d\" (UID: \"c050db1b-3854-406d-8cc5-fc997e9a1abe\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dfl4d" Jan 29 16:28:22 crc kubenswrapper[4886]: I0129 16:28:22.485674 4886 util.go:30] "No sandbox for pod can be found. 
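
Three catalog-source pods above (certified-operators-jfv6k, community-operators-q5hs7, redhat-operators-zkk68) fail their extract-content init container with the same error, and redhat-marketplace-4qbl4 follows below: the registry returns 403 (Forbidden) while the client is still requesting a bearer token, so the failure happens during credential exchange, before any image data moves. That pattern usually points at a missing or expired pull secret for registry.redhat.io rather than a network problem. A hedged sketch for checking a dockerconfigjson-style pull secret for that registry (the file path is an assumption; on OpenShift nodes the kubelet's copy is commonly /var/lib/kubelet/config.json):

```go
// Sketch: report whether a dockerconfigjson pull secret contains
// credentials for registry.redhat.io. Pass the path explicitly and
// verify it on your own system.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type dockerConfig struct {
	Auths map[string]struct {
		Auth string `json:"auth"`
	} `json:"auths"`
}

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: checkpullsecret <path-to-config.json>")
		os.Exit(2)
	}
	raw, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var cfg dockerConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if _, ok := cfg.Auths["registry.redhat.io"]; ok {
		fmt.Println("credential entry for registry.redhat.io present")
	} else {
		fmt.Println("no credential for registry.redhat.io: 403s like the ones above are expected")
	}
}
```
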
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dfl4d" Jan 29 16:28:22 crc kubenswrapper[4886]: E0129 16:28:22.789977 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:28:22 crc kubenswrapper[4886]: E0129 16:28:22.790737 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vf7sq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4qbl4_openshift-marketplace(57aa9115-b2d5-45aa-8ac3-e251c0907e45): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:28:22 crc kubenswrapper[4886]: E0129 16:28:22.791914 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-4qbl4" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" Jan 29 16:28:22 crc kubenswrapper[4886]: I0129 16:28:22.896986 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dfl4d"] Jan 29 16:28:23 crc kubenswrapper[4886]: I0129 16:28:23.806055 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dfl4d" event={"ID":"c050db1b-3854-406d-8cc5-fc997e9a1abe","Type":"ContainerStarted","Data":"8455383e60539f36cbaa22b285d5315c40a7997e6100f72dc2fa08d6ee382658"} Jan 29 16:28:24 crc kubenswrapper[4886]: I0129 16:28:24.812198 4886 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dfl4d" event={"ID":"c050db1b-3854-406d-8cc5-fc997e9a1abe","Type":"ContainerStarted","Data":"252860898c2683bc1c12338582f088908b61f2309f0b69654c33a383f7fa819b"} Jan 29 16:28:24 crc kubenswrapper[4886]: I0129 16:28:24.812584 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dfl4d" Jan 29 16:28:24 crc kubenswrapper[4886]: I0129 16:28:24.817384 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dfl4d" Jan 29 16:28:24 crc kubenswrapper[4886]: I0129 16:28:24.827993 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dfl4d" podStartSLOduration=1.136545306 podStartE2EDuration="2.827978405s" podCreationTimestamp="2026-01-29 16:28:22 +0000 UTC" firstStartedPulling="2026-01-29 16:28:22.906964178 +0000 UTC m=+385.815683460" lastFinishedPulling="2026-01-29 16:28:24.598397287 +0000 UTC m=+387.507116559" observedRunningTime="2026-01-29 16:28:24.827837561 +0000 UTC m=+387.736556833" watchObservedRunningTime="2026-01-29 16:28:24.827978405 +0000 UTC m=+387.736697677" Jan 29 16:28:25 crc kubenswrapper[4886]: I0129 16:28:25.241833 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-g77js"] Jan 29 16:28:25 crc kubenswrapper[4886]: I0129 16:28:25.242636 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-g77js" Jan 29 16:28:25 crc kubenswrapper[4886]: I0129 16:28:25.244035 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-bpjmt" Jan 29 16:28:25 crc kubenswrapper[4886]: I0129 16:28:25.245204 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Jan 29 16:28:25 crc kubenswrapper[4886]: I0129 16:28:25.245249 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Jan 29 16:28:25 crc kubenswrapper[4886]: I0129 16:28:25.245518 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Jan 29 16:28:25 crc kubenswrapper[4886]: I0129 16:28:25.259276 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-g77js"] Jan 29 16:28:25 crc kubenswrapper[4886]: I0129 16:28:25.401877 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/672614ef-138a-405e-a615-b56724368e8f-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-g77js\" (UID: \"672614ef-138a-405e-a615-b56724368e8f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g77js" Jan 29 16:28:25 crc kubenswrapper[4886]: I0129 16:28:25.402034 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lvt8\" (UniqueName: \"kubernetes.io/projected/672614ef-138a-405e-a615-b56724368e8f-kube-api-access-5lvt8\") pod \"prometheus-operator-db54df47d-g77js\" (UID: \"672614ef-138a-405e-a615-b56724368e8f\") " 
pod="openshift-monitoring/prometheus-operator-db54df47d-g77js" Jan 29 16:28:25 crc kubenswrapper[4886]: I0129 16:28:25.402097 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/672614ef-138a-405e-a615-b56724368e8f-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-g77js\" (UID: \"672614ef-138a-405e-a615-b56724368e8f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g77js" Jan 29 16:28:25 crc kubenswrapper[4886]: I0129 16:28:25.402147 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/672614ef-138a-405e-a615-b56724368e8f-metrics-client-ca\") pod \"prometheus-operator-db54df47d-g77js\" (UID: \"672614ef-138a-405e-a615-b56724368e8f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g77js" Jan 29 16:28:25 crc kubenswrapper[4886]: I0129 16:28:25.503471 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/672614ef-138a-405e-a615-b56724368e8f-metrics-client-ca\") pod \"prometheus-operator-db54df47d-g77js\" (UID: \"672614ef-138a-405e-a615-b56724368e8f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g77js" Jan 29 16:28:25 crc kubenswrapper[4886]: I0129 16:28:25.503654 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/672614ef-138a-405e-a615-b56724368e8f-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-g77js\" (UID: \"672614ef-138a-405e-a615-b56724368e8f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g77js" Jan 29 16:28:25 crc kubenswrapper[4886]: I0129 16:28:25.503771 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lvt8\" (UniqueName: \"kubernetes.io/projected/672614ef-138a-405e-a615-b56724368e8f-kube-api-access-5lvt8\") pod \"prometheus-operator-db54df47d-g77js\" (UID: \"672614ef-138a-405e-a615-b56724368e8f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g77js" Jan 29 16:28:25 crc kubenswrapper[4886]: I0129 16:28:25.503848 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/672614ef-138a-405e-a615-b56724368e8f-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-g77js\" (UID: \"672614ef-138a-405e-a615-b56724368e8f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g77js" Jan 29 16:28:25 crc kubenswrapper[4886]: I0129 16:28:25.505282 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/672614ef-138a-405e-a615-b56724368e8f-metrics-client-ca\") pod \"prometheus-operator-db54df47d-g77js\" (UID: \"672614ef-138a-405e-a615-b56724368e8f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g77js" Jan 29 16:28:25 crc kubenswrapper[4886]: I0129 16:28:25.511734 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/672614ef-138a-405e-a615-b56724368e8f-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-g77js\" (UID: \"672614ef-138a-405e-a615-b56724368e8f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g77js" Jan 29 16:28:25 crc 
kubenswrapper[4886]: I0129 16:28:25.515441 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/672614ef-138a-405e-a615-b56724368e8f-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-g77js\" (UID: \"672614ef-138a-405e-a615-b56724368e8f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g77js" Jan 29 16:28:25 crc kubenswrapper[4886]: I0129 16:28:25.527132 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lvt8\" (UniqueName: \"kubernetes.io/projected/672614ef-138a-405e-a615-b56724368e8f-kube-api-access-5lvt8\") pod \"prometheus-operator-db54df47d-g77js\" (UID: \"672614ef-138a-405e-a615-b56724368e8f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g77js" Jan 29 16:28:25 crc kubenswrapper[4886]: I0129 16:28:25.557380 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-g77js" Jan 29 16:28:25 crc kubenswrapper[4886]: I0129 16:28:25.964872 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-g77js"] Jan 29 16:28:26 crc kubenswrapper[4886]: I0129 16:28:26.822712 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-g77js" event={"ID":"672614ef-138a-405e-a615-b56724368e8f","Type":"ContainerStarted","Data":"0ee607a1132785eb0b57178d86676b176f64d1ebd9cc2429a153eeeac5628f4e"} Jan 29 16:28:27 crc kubenswrapper[4886]: I0129 16:28:27.830123 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-g77js" event={"ID":"672614ef-138a-405e-a615-b56724368e8f","Type":"ContainerStarted","Data":"1a669f284167fd42f2cf77fd3bad2013a7bf2323b79ec9e9b09f83ee67918217"} Jan 29 16:28:28 crc kubenswrapper[4886]: I0129 16:28:28.836513 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-g77js" event={"ID":"672614ef-138a-405e-a615-b56724368e8f","Type":"ContainerStarted","Data":"3bb35df2f59865df8f660eda08260051716912f6ac3e8c1839863b657a15182b"} Jan 29 16:28:28 crc kubenswrapper[4886]: I0129 16:28:28.862670 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-g77js" podStartSLOduration=2.244039582 podStartE2EDuration="3.86264173s" podCreationTimestamp="2026-01-29 16:28:25 +0000 UTC" firstStartedPulling="2026-01-29 16:28:25.972795466 +0000 UTC m=+388.881514738" lastFinishedPulling="2026-01-29 16:28:27.591397604 +0000 UTC m=+390.500116886" observedRunningTime="2026-01-29 16:28:28.858379101 +0000 UTC m=+391.767098383" watchObservedRunningTime="2026-01-29 16:28:28.86264173 +0000 UTC m=+391.771361062" Jan 29 16:28:29 crc kubenswrapper[4886]: I0129 16:28:29.661140 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:28:29 crc kubenswrapper[4886]: I0129 16:28:29.661227 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.551864 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-w4847"] Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.554270 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-w4847" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.566531 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-28t5x"] Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.566828 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.567066 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-dxqr2" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.567557 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-28t5x" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.568815 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-w4847"] Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.570215 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.570659 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.570967 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.572059 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-v7f66" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.580908 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.585048 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-28t5x"] Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.603541 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-tsz6m"] Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.604642 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-tsz6m" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.610651 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.610819 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.610928 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-5rgqh" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.674318 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qktqq\" (UniqueName: \"kubernetes.io/projected/a5d78538-806d-458c-ae3c-4ac03596fe18-kube-api-access-qktqq\") pod \"kube-state-metrics-777cb5bd5d-28t5x\" (UID: \"a5d78538-806d-458c-ae3c-4ac03596fe18\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-28t5x" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.674588 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/a5d78538-806d-458c-ae3c-4ac03596fe18-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-28t5x\" (UID: \"a5d78538-806d-458c-ae3c-4ac03596fe18\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-28t5x" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.674689 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a5d78538-806d-458c-ae3c-4ac03596fe18-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-28t5x\" (UID: \"a5d78538-806d-458c-ae3c-4ac03596fe18\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-28t5x" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.674773 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8fc5b733-9271-4576-b06b-f6bece792d8a-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-w4847\" (UID: \"8fc5b733-9271-4576-b06b-f6bece792d8a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-w4847" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.674847 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a5d78538-806d-458c-ae3c-4ac03596fe18-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-28t5x\" (UID: \"a5d78538-806d-458c-ae3c-4ac03596fe18\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-28t5x" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.674941 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a5d78538-806d-458c-ae3c-4ac03596fe18-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-28t5x\" (UID: \"a5d78538-806d-458c-ae3c-4ac03596fe18\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-28t5x" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.675057 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" 
(UniqueName: \"kubernetes.io/secret/8fc5b733-9271-4576-b06b-f6bece792d8a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-w4847\" (UID: \"8fc5b733-9271-4576-b06b-f6bece792d8a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-w4847" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.675143 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8fc5b733-9271-4576-b06b-f6bece792d8a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-w4847\" (UID: \"8fc5b733-9271-4576-b06b-f6bece792d8a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-w4847" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.675223 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/a5d78538-806d-458c-ae3c-4ac03596fe18-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-28t5x\" (UID: \"a5d78538-806d-458c-ae3c-4ac03596fe18\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-28t5x" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.675317 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4mw4\" (UniqueName: \"kubernetes.io/projected/8fc5b733-9271-4576-b06b-f6bece792d8a-kube-api-access-d4mw4\") pod \"openshift-state-metrics-566fddb674-w4847\" (UID: \"8fc5b733-9271-4576-b06b-f6bece792d8a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-w4847" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.776883 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a5d78538-806d-458c-ae3c-4ac03596fe18-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-28t5x\" (UID: \"a5d78538-806d-458c-ae3c-4ac03596fe18\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-28t5x" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.777144 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/86cef950-d7b4-468c-bb9f-e71a98ffe676-node-exporter-textfile\") pod \"node-exporter-tsz6m\" (UID: \"86cef950-d7b4-468c-bb9f-e71a98ffe676\") " pod="openshift-monitoring/node-exporter-tsz6m" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.777293 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/86cef950-d7b4-468c-bb9f-e71a98ffe676-metrics-client-ca\") pod \"node-exporter-tsz6m\" (UID: \"86cef950-d7b4-468c-bb9f-e71a98ffe676\") " pod="openshift-monitoring/node-exporter-tsz6m" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.777449 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8fc5b733-9271-4576-b06b-f6bece792d8a-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-w4847\" (UID: \"8fc5b733-9271-4576-b06b-f6bece792d8a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-w4847" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.777610 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a5d78538-806d-458c-ae3c-4ac03596fe18-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-28t5x\" (UID: \"a5d78538-806d-458c-ae3c-4ac03596fe18\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-28t5x" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.777752 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/86cef950-d7b4-468c-bb9f-e71a98ffe676-node-exporter-wtmp\") pod \"node-exporter-tsz6m\" (UID: \"86cef950-d7b4-468c-bb9f-e71a98ffe676\") " pod="openshift-monitoring/node-exporter-tsz6m" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.777867 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a5d78538-806d-458c-ae3c-4ac03596fe18-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-28t5x\" (UID: \"a5d78538-806d-458c-ae3c-4ac03596fe18\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-28t5x" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.777973 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd4c9\" (UniqueName: \"kubernetes.io/projected/86cef950-d7b4-468c-bb9f-e71a98ffe676-kube-api-access-vd4c9\") pod \"node-exporter-tsz6m\" (UID: \"86cef950-d7b4-468c-bb9f-e71a98ffe676\") " pod="openshift-monitoring/node-exporter-tsz6m" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.778078 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/86cef950-d7b4-468c-bb9f-e71a98ffe676-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-tsz6m\" (UID: \"86cef950-d7b4-468c-bb9f-e71a98ffe676\") " pod="openshift-monitoring/node-exporter-tsz6m" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.778163 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/86cef950-d7b4-468c-bb9f-e71a98ffe676-sys\") pod \"node-exporter-tsz6m\" (UID: \"86cef950-d7b4-468c-bb9f-e71a98ffe676\") " pod="openshift-monitoring/node-exporter-tsz6m" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.778248 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8fc5b733-9271-4576-b06b-f6bece792d8a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-w4847\" (UID: \"8fc5b733-9271-4576-b06b-f6bece792d8a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-w4847" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.778318 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/86cef950-d7b4-468c-bb9f-e71a98ffe676-root\") pod \"node-exporter-tsz6m\" (UID: \"86cef950-d7b4-468c-bb9f-e71a98ffe676\") " pod="openshift-monitoring/node-exporter-tsz6m" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.778457 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/86cef950-d7b4-468c-bb9f-e71a98ffe676-node-exporter-tls\") pod 
\"node-exporter-tsz6m\" (UID: \"86cef950-d7b4-468c-bb9f-e71a98ffe676\") " pod="openshift-monitoring/node-exporter-tsz6m" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.778562 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8fc5b733-9271-4576-b06b-f6bece792d8a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-w4847\" (UID: \"8fc5b733-9271-4576-b06b-f6bece792d8a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-w4847" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.778652 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/a5d78538-806d-458c-ae3c-4ac03596fe18-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-28t5x\" (UID: \"a5d78538-806d-458c-ae3c-4ac03596fe18\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-28t5x" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.778754 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4mw4\" (UniqueName: \"kubernetes.io/projected/8fc5b733-9271-4576-b06b-f6bece792d8a-kube-api-access-d4mw4\") pod \"openshift-state-metrics-566fddb674-w4847\" (UID: \"8fc5b733-9271-4576-b06b-f6bece792d8a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-w4847" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.778915 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qktqq\" (UniqueName: \"kubernetes.io/projected/a5d78538-806d-458c-ae3c-4ac03596fe18-kube-api-access-qktqq\") pod \"kube-state-metrics-777cb5bd5d-28t5x\" (UID: \"a5d78538-806d-458c-ae3c-4ac03596fe18\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-28t5x" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.779028 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/a5d78538-806d-458c-ae3c-4ac03596fe18-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-28t5x\" (UID: \"a5d78538-806d-458c-ae3c-4ac03596fe18\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-28t5x" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.779049 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a5d78538-806d-458c-ae3c-4ac03596fe18-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-28t5x\" (UID: \"a5d78538-806d-458c-ae3c-4ac03596fe18\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-28t5x" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.778379 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8fc5b733-9271-4576-b06b-f6bece792d8a-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-w4847\" (UID: \"8fc5b733-9271-4576-b06b-f6bece792d8a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-w4847" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.779569 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/a5d78538-806d-458c-ae3c-4ac03596fe18-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-28t5x\" (UID: \"a5d78538-806d-458c-ae3c-4ac03596fe18\") " 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-28t5x" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.779753 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/a5d78538-806d-458c-ae3c-4ac03596fe18-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-28t5x\" (UID: \"a5d78538-806d-458c-ae3c-4ac03596fe18\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-28t5x" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.783764 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8fc5b733-9271-4576-b06b-f6bece792d8a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-w4847\" (UID: \"8fc5b733-9271-4576-b06b-f6bece792d8a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-w4847" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.783921 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8fc5b733-9271-4576-b06b-f6bece792d8a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-w4847\" (UID: \"8fc5b733-9271-4576-b06b-f6bece792d8a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-w4847" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.784564 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a5d78538-806d-458c-ae3c-4ac03596fe18-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-28t5x\" (UID: \"a5d78538-806d-458c-ae3c-4ac03596fe18\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-28t5x" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.784568 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a5d78538-806d-458c-ae3c-4ac03596fe18-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-28t5x\" (UID: \"a5d78538-806d-458c-ae3c-4ac03596fe18\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-28t5x" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.797978 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4mw4\" (UniqueName: \"kubernetes.io/projected/8fc5b733-9271-4576-b06b-f6bece792d8a-kube-api-access-d4mw4\") pod \"openshift-state-metrics-566fddb674-w4847\" (UID: \"8fc5b733-9271-4576-b06b-f6bece792d8a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-w4847" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.799504 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qktqq\" (UniqueName: \"kubernetes.io/projected/a5d78538-806d-458c-ae3c-4ac03596fe18-kube-api-access-qktqq\") pod \"kube-state-metrics-777cb5bd5d-28t5x\" (UID: \"a5d78538-806d-458c-ae3c-4ac03596fe18\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-28t5x" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.880516 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-w4847" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.880887 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/86cef950-d7b4-468c-bb9f-e71a98ffe676-node-exporter-textfile\") pod \"node-exporter-tsz6m\" (UID: \"86cef950-d7b4-468c-bb9f-e71a98ffe676\") " pod="openshift-monitoring/node-exporter-tsz6m" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.880925 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/86cef950-d7b4-468c-bb9f-e71a98ffe676-metrics-client-ca\") pod \"node-exporter-tsz6m\" (UID: \"86cef950-d7b4-468c-bb9f-e71a98ffe676\") " pod="openshift-monitoring/node-exporter-tsz6m" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.880950 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/86cef950-d7b4-468c-bb9f-e71a98ffe676-node-exporter-wtmp\") pod \"node-exporter-tsz6m\" (UID: \"86cef950-d7b4-468c-bb9f-e71a98ffe676\") " pod="openshift-monitoring/node-exporter-tsz6m" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.880967 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd4c9\" (UniqueName: \"kubernetes.io/projected/86cef950-d7b4-468c-bb9f-e71a98ffe676-kube-api-access-vd4c9\") pod \"node-exporter-tsz6m\" (UID: \"86cef950-d7b4-468c-bb9f-e71a98ffe676\") " pod="openshift-monitoring/node-exporter-tsz6m" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.880987 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/86cef950-d7b4-468c-bb9f-e71a98ffe676-sys\") pod \"node-exporter-tsz6m\" (UID: \"86cef950-d7b4-468c-bb9f-e71a98ffe676\") " pod="openshift-monitoring/node-exporter-tsz6m" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.881003 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/86cef950-d7b4-468c-bb9f-e71a98ffe676-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-tsz6m\" (UID: \"86cef950-d7b4-468c-bb9f-e71a98ffe676\") " pod="openshift-monitoring/node-exporter-tsz6m" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.881022 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/86cef950-d7b4-468c-bb9f-e71a98ffe676-root\") pod \"node-exporter-tsz6m\" (UID: \"86cef950-d7b4-468c-bb9f-e71a98ffe676\") " pod="openshift-monitoring/node-exporter-tsz6m" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.881042 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/86cef950-d7b4-468c-bb9f-e71a98ffe676-node-exporter-tls\") pod \"node-exporter-tsz6m\" (UID: \"86cef950-d7b4-468c-bb9f-e71a98ffe676\") " pod="openshift-monitoring/node-exporter-tsz6m" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.881341 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/86cef950-d7b4-468c-bb9f-e71a98ffe676-root\") pod \"node-exporter-tsz6m\" (UID: \"86cef950-d7b4-468c-bb9f-e71a98ffe676\") " pod="openshift-monitoring/node-exporter-tsz6m" Jan 
29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.881377 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/86cef950-d7b4-468c-bb9f-e71a98ffe676-sys\") pod \"node-exporter-tsz6m\" (UID: \"86cef950-d7b4-468c-bb9f-e71a98ffe676\") " pod="openshift-monitoring/node-exporter-tsz6m" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.881508 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/86cef950-d7b4-468c-bb9f-e71a98ffe676-node-exporter-wtmp\") pod \"node-exporter-tsz6m\" (UID: \"86cef950-d7b4-468c-bb9f-e71a98ffe676\") " pod="openshift-monitoring/node-exporter-tsz6m" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.882122 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/86cef950-d7b4-468c-bb9f-e71a98ffe676-node-exporter-textfile\") pod \"node-exporter-tsz6m\" (UID: \"86cef950-d7b4-468c-bb9f-e71a98ffe676\") " pod="openshift-monitoring/node-exporter-tsz6m" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.882462 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/86cef950-d7b4-468c-bb9f-e71a98ffe676-metrics-client-ca\") pod \"node-exporter-tsz6m\" (UID: \"86cef950-d7b4-468c-bb9f-e71a98ffe676\") " pod="openshift-monitoring/node-exporter-tsz6m" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.885969 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/86cef950-d7b4-468c-bb9f-e71a98ffe676-node-exporter-tls\") pod \"node-exporter-tsz6m\" (UID: \"86cef950-d7b4-468c-bb9f-e71a98ffe676\") " pod="openshift-monitoring/node-exporter-tsz6m" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.886783 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/86cef950-d7b4-468c-bb9f-e71a98ffe676-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-tsz6m\" (UID: \"86cef950-d7b4-468c-bb9f-e71a98ffe676\") " pod="openshift-monitoring/node-exporter-tsz6m" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.895766 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-28t5x" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.907298 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd4c9\" (UniqueName: \"kubernetes.io/projected/86cef950-d7b4-468c-bb9f-e71a98ffe676-kube-api-access-vd4c9\") pod \"node-exporter-tsz6m\" (UID: \"86cef950-d7b4-468c-bb9f-e71a98ffe676\") " pod="openshift-monitoring/node-exporter-tsz6m" Jan 29 16:28:30 crc kubenswrapper[4886]: I0129 16:28:30.935168 4886 util.go:30] "No sandbox for pod can be found. 
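
node-exporter-tsz6m is the odd one out above: besides the usual secret and configmap volumes it mounts hostPath volumes (root, sys, node-exporter-wtmp) so the exporter can read host-level metrics, plus an emptyDir named node-exporter-textfile for the textfile collector. How that emptyDir is wired to the exporter is not visible in this log, but the usual pattern is to drop *.prom files into the directory node_exporter watches; a sketch, with an assumed directory path:

```go
// Sketch of the textfile-collector pattern suggested by the
// node-exporter-textfile emptyDir above. The directory is hypothetical;
// match whatever --collector.textfile.directory points at.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/var/node_exporter/textfile" // assumption, not from the log
	tmp := filepath.Join(dir, ".backup_age.prom.tmp")
	final := filepath.Join(dir, "backup_age.prom")

	body := "# HELP backup_age_seconds Seconds since the last backup.\n" +
		"# TYPE backup_age_seconds gauge\n" +
		"backup_age_seconds 42\n"
	if err := os.WriteFile(tmp, []byte(body), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Rename atomically so the collector never reads a half-written file.
	if err := os.Rename(tmp, final); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
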
Need to start a new one" pod="openshift-monitoring/node-exporter-tsz6m" Jan 29 16:28:30 crc kubenswrapper[4886]: W0129 16:28:30.959781 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86cef950_d7b4_468c_bb9f_e71a98ffe676.slice/crio-f2ccfa8ffd6d77641522959a801d9126e80a9c79315d6bf26f7ce89ec7e4b511 WatchSource:0}: Error finding container f2ccfa8ffd6d77641522959a801d9126e80a9c79315d6bf26f7ce89ec7e4b511: Status 404 returned error can't find the container with id f2ccfa8ffd6d77641522959a801d9126e80a9c79315d6bf26f7ce89ec7e4b511 Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.286186 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-w4847"] Jan 29 16:28:31 crc kubenswrapper[4886]: W0129 16:28:31.292444 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fc5b733_9271_4576_b06b_f6bece792d8a.slice/crio-35335d098d1e6004276f92ee90f008ab46cdd56e260d8b8c5af8ae31745dec40 WatchSource:0}: Error finding container 35335d098d1e6004276f92ee90f008ab46cdd56e260d8b8c5af8ae31745dec40: Status 404 returned error can't find the container with id 35335d098d1e6004276f92ee90f008ab46cdd56e260d8b8c5af8ae31745dec40 Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.365721 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-28t5x"] Jan 29 16:28:31 crc kubenswrapper[4886]: W0129 16:28:31.369255 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5d78538_806d_458c_ae3c_4ac03596fe18.slice/crio-037e2449384b22b4b812bba703eed3b9414e27a7f858d877a1204e0f2a303e0b WatchSource:0}: Error finding container 037e2449384b22b4b812bba703eed3b9414e27a7f858d877a1204e0f2a303e0b: Status 404 returned error can't find the container with id 037e2449384b22b4b812bba703eed3b9414e27a7f858d877a1204e0f2a303e0b Jan 29 16:28:31 crc kubenswrapper[4886]: E0129 16:28:31.617433 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.665162 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.667401 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.680596 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.680636 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.680674 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-chnnp" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.680684 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.683538 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.683629 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.684361 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.684561 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.690343 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.690625 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.690651 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-config-out\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.690685 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.690705 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.690786 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: 
\"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.690860 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-config-volume\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.690902 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.690926 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h4ql\" (UniqueName: \"kubernetes.io/projected/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-kube-api-access-7h4ql\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.691164 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-web-config\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.691191 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-tls-assets\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.691210 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.691230 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.691254 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.791988 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.792072 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-config-out\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.792109 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.792140 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.792164 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.792188 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-config-volume\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.792212 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.792236 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h4ql\" (UniqueName: \"kubernetes.io/projected/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-kube-api-access-7h4ql\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.792306 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-web-config\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.792353 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-tls-assets\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.792379 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.792412 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.796813 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.797225 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.798159 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.798162 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-config-out\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.798272 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.799386 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.799750 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-web-config\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.802078 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.802607 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-config-volume\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.803768 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-tls-assets\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.803982 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.816699 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h4ql\" (UniqueName: \"kubernetes.io/projected/43bcb21d-ccb0-474a-8a4b-20c4fd56904a-kube-api-access-7h4ql\") pod \"alertmanager-main-0\" (UID: \"43bcb21d-ccb0-474a-8a4b-20c4fd56904a\") " pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.857000 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-tsz6m" event={"ID":"86cef950-d7b4-468c-bb9f-e71a98ffe676","Type":"ContainerStarted","Data":"f2ccfa8ffd6d77641522959a801d9126e80a9c79315d6bf26f7ce89ec7e4b511"} Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.860550 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-w4847" event={"ID":"8fc5b733-9271-4576-b06b-f6bece792d8a","Type":"ContainerStarted","Data":"c6c007ce7d14dad969f24874130472966dedd8ff15d70b4ce278565fb9dacdc4"} Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.860604 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-w4847" event={"ID":"8fc5b733-9271-4576-b06b-f6bece792d8a","Type":"ContainerStarted","Data":"e6e7efec676ad7f430a5071703c136559743f22c02db645476e614c951695f5d"} Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.860618 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-w4847" 
event={"ID":"8fc5b733-9271-4576-b06b-f6bece792d8a","Type":"ContainerStarted","Data":"35335d098d1e6004276f92ee90f008ab46cdd56e260d8b8c5af8ae31745dec40"} Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.861597 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-28t5x" event={"ID":"a5d78538-806d-458c-ae3c-4ac03596fe18","Type":"ContainerStarted","Data":"037e2449384b22b4b812bba703eed3b9414e27a7f858d877a1204e0f2a303e0b"} Jan 29 16:28:31 crc kubenswrapper[4886]: I0129 16:28:31.984115 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Jan 29 16:28:32 crc kubenswrapper[4886]: I0129 16:28:32.739880 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-645496d5c-x86sq"] Jan 29 16:28:32 crc kubenswrapper[4886]: I0129 16:28:32.741800 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-645496d5c-x86sq"] Jan 29 16:28:32 crc kubenswrapper[4886]: I0129 16:28:32.741884 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" Jan 29 16:28:32 crc kubenswrapper[4886]: I0129 16:28:32.748835 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-7l1p2e2gr0th4" Jan 29 16:28:32 crc kubenswrapper[4886]: I0129 16:28:32.748857 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Jan 29 16:28:32 crc kubenswrapper[4886]: I0129 16:28:32.749029 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Jan 29 16:28:32 crc kubenswrapper[4886]: I0129 16:28:32.749402 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Jan 29 16:28:32 crc kubenswrapper[4886]: I0129 16:28:32.749523 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Jan 29 16:28:32 crc kubenswrapper[4886]: I0129 16:28:32.749754 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Jan 29 16:28:32 crc kubenswrapper[4886]: I0129 16:28:32.749770 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-p77fn" Jan 29 16:28:32 crc kubenswrapper[4886]: I0129 16:28:32.798177 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 29 16:28:32 crc kubenswrapper[4886]: E0129 16:28:32.907645 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q5hs7" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091" Jan 29 16:28:32 crc kubenswrapper[4886]: I0129 16:28:32.930454 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/cf13b56e-deb1-4a2d-8d41-139db9eb5dbe-secret-grpc-tls\") pod \"thanos-querier-645496d5c-x86sq\" (UID: \"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe\") " pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" Jan 29 16:28:32 crc 
kubenswrapper[4886]: I0129 16:28:32.930614 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cf13b56e-deb1-4a2d-8d41-139db9eb5dbe-metrics-client-ca\") pod \"thanos-querier-645496d5c-x86sq\" (UID: \"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe\") " pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" Jan 29 16:28:32 crc kubenswrapper[4886]: I0129 16:28:32.930679 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/cf13b56e-deb1-4a2d-8d41-139db9eb5dbe-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-645496d5c-x86sq\" (UID: \"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe\") " pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" Jan 29 16:28:32 crc kubenswrapper[4886]: I0129 16:28:32.931927 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8pfg\" (UniqueName: \"kubernetes.io/projected/cf13b56e-deb1-4a2d-8d41-139db9eb5dbe-kube-api-access-c8pfg\") pod \"thanos-querier-645496d5c-x86sq\" (UID: \"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe\") " pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" Jan 29 16:28:32 crc kubenswrapper[4886]: I0129 16:28:32.932026 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/cf13b56e-deb1-4a2d-8d41-139db9eb5dbe-secret-thanos-querier-tls\") pod \"thanos-querier-645496d5c-x86sq\" (UID: \"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe\") " pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" Jan 29 16:28:32 crc kubenswrapper[4886]: I0129 16:28:32.932136 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/cf13b56e-deb1-4a2d-8d41-139db9eb5dbe-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-645496d5c-x86sq\" (UID: \"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe\") " pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" Jan 29 16:28:32 crc kubenswrapper[4886]: I0129 16:28:32.932166 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/cf13b56e-deb1-4a2d-8d41-139db9eb5dbe-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-645496d5c-x86sq\" (UID: \"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe\") " pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" Jan 29 16:28:32 crc kubenswrapper[4886]: I0129 16:28:32.932226 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/cf13b56e-deb1-4a2d-8d41-139db9eb5dbe-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-645496d5c-x86sq\" (UID: \"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe\") " pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" Jan 29 16:28:33 crc kubenswrapper[4886]: I0129 16:28:33.033710 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/cf13b56e-deb1-4a2d-8d41-139db9eb5dbe-secret-thanos-querier-tls\") pod \"thanos-querier-645496d5c-x86sq\" (UID: \"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe\") " 
pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" Jan 29 16:28:33 crc kubenswrapper[4886]: I0129 16:28:33.033799 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/cf13b56e-deb1-4a2d-8d41-139db9eb5dbe-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-645496d5c-x86sq\" (UID: \"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe\") " pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" Jan 29 16:28:33 crc kubenswrapper[4886]: I0129 16:28:33.033846 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/cf13b56e-deb1-4a2d-8d41-139db9eb5dbe-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-645496d5c-x86sq\" (UID: \"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe\") " pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" Jan 29 16:28:33 crc kubenswrapper[4886]: I0129 16:28:33.033883 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/cf13b56e-deb1-4a2d-8d41-139db9eb5dbe-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-645496d5c-x86sq\" (UID: \"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe\") " pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" Jan 29 16:28:33 crc kubenswrapper[4886]: I0129 16:28:33.033962 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/cf13b56e-deb1-4a2d-8d41-139db9eb5dbe-secret-grpc-tls\") pod \"thanos-querier-645496d5c-x86sq\" (UID: \"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe\") " pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" Jan 29 16:28:33 crc kubenswrapper[4886]: I0129 16:28:33.034174 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cf13b56e-deb1-4a2d-8d41-139db9eb5dbe-metrics-client-ca\") pod \"thanos-querier-645496d5c-x86sq\" (UID: \"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe\") " pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" Jan 29 16:28:33 crc kubenswrapper[4886]: I0129 16:28:33.034787 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/cf13b56e-deb1-4a2d-8d41-139db9eb5dbe-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-645496d5c-x86sq\" (UID: \"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe\") " pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" Jan 29 16:28:33 crc kubenswrapper[4886]: I0129 16:28:33.034816 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8pfg\" (UniqueName: \"kubernetes.io/projected/cf13b56e-deb1-4a2d-8d41-139db9eb5dbe-kube-api-access-c8pfg\") pod \"thanos-querier-645496d5c-x86sq\" (UID: \"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe\") " pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" Jan 29 16:28:33 crc kubenswrapper[4886]: I0129 16:28:33.035054 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cf13b56e-deb1-4a2d-8d41-139db9eb5dbe-metrics-client-ca\") pod \"thanos-querier-645496d5c-x86sq\" (UID: \"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe\") " pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" Jan 29 16:28:33 crc 
kubenswrapper[4886]: I0129 16:28:33.039312 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/cf13b56e-deb1-4a2d-8d41-139db9eb5dbe-secret-grpc-tls\") pod \"thanos-querier-645496d5c-x86sq\" (UID: \"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe\") " pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" Jan 29 16:28:33 crc kubenswrapper[4886]: I0129 16:28:33.039502 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/cf13b56e-deb1-4a2d-8d41-139db9eb5dbe-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-645496d5c-x86sq\" (UID: \"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe\") " pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" Jan 29 16:28:33 crc kubenswrapper[4886]: I0129 16:28:33.039561 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/cf13b56e-deb1-4a2d-8d41-139db9eb5dbe-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-645496d5c-x86sq\" (UID: \"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe\") " pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" Jan 29 16:28:33 crc kubenswrapper[4886]: I0129 16:28:33.040145 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/cf13b56e-deb1-4a2d-8d41-139db9eb5dbe-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-645496d5c-x86sq\" (UID: \"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe\") " pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" Jan 29 16:28:33 crc kubenswrapper[4886]: I0129 16:28:33.040474 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/cf13b56e-deb1-4a2d-8d41-139db9eb5dbe-secret-thanos-querier-tls\") pod \"thanos-querier-645496d5c-x86sq\" (UID: \"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe\") " pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" Jan 29 16:28:33 crc kubenswrapper[4886]: I0129 16:28:33.048983 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/cf13b56e-deb1-4a2d-8d41-139db9eb5dbe-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-645496d5c-x86sq\" (UID: \"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe\") " pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" Jan 29 16:28:33 crc kubenswrapper[4886]: I0129 16:28:33.063836 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8pfg\" (UniqueName: \"kubernetes.io/projected/cf13b56e-deb1-4a2d-8d41-139db9eb5dbe-kube-api-access-c8pfg\") pod \"thanos-querier-645496d5c-x86sq\" (UID: \"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe\") " pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" Jan 29 16:28:33 crc kubenswrapper[4886]: I0129 16:28:33.066703 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" Jan 29 16:28:33 crc kubenswrapper[4886]: E0129 16:28:33.616099 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:28:33 crc kubenswrapper[4886]: E0129 16:28:33.616870 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4qbl4" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" Jan 29 16:28:33 crc kubenswrapper[4886]: I0129 16:28:33.744681 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-645496d5c-x86sq"] Jan 29 16:28:33 crc kubenswrapper[4886]: W0129 16:28:33.762768 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf13b56e_deb1_4a2d_8d41_139db9eb5dbe.slice/crio-277767fe6c1e80f59467b3a2a2d85eb485fd3daaee3c03fe69cf330fdf2d3f9e WatchSource:0}: Error finding container 277767fe6c1e80f59467b3a2a2d85eb485fd3daaee3c03fe69cf330fdf2d3f9e: Status 404 returned error can't find the container with id 277767fe6c1e80f59467b3a2a2d85eb485fd3daaee3c03fe69cf330fdf2d3f9e Jan 29 16:28:33 crc kubenswrapper[4886]: I0129 16:28:33.875344 4886 generic.go:334] "Generic (PLEG): container finished" podID="86cef950-d7b4-468c-bb9f-e71a98ffe676" containerID="35c0a7c8171d777037ab6ba9c183894a9159683145da271a51441c28f2fd717b" exitCode=0 Jan 29 16:28:33 crc kubenswrapper[4886]: I0129 16:28:33.875410 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-tsz6m" event={"ID":"86cef950-d7b4-468c-bb9f-e71a98ffe676","Type":"ContainerDied","Data":"35c0a7c8171d777037ab6ba9c183894a9159683145da271a51441c28f2fd717b"} Jan 29 16:28:33 crc kubenswrapper[4886]: I0129 16:28:33.881554 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"43bcb21d-ccb0-474a-8a4b-20c4fd56904a","Type":"ContainerStarted","Data":"c4f9c342c319c4a8afe12a37991cc1ceb0e97ac29ad4f77ada690f6b56230195"} Jan 29 16:28:33 crc kubenswrapper[4886]: I0129 16:28:33.895269 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" event={"ID":"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe","Type":"ContainerStarted","Data":"277767fe6c1e80f59467b3a2a2d85eb485fd3daaee3c03fe69cf330fdf2d3f9e"} Jan 29 16:28:33 crc kubenswrapper[4886]: I0129 16:28:33.898990 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-w4847" event={"ID":"8fc5b733-9271-4576-b06b-f6bece792d8a","Type":"ContainerStarted","Data":"aef040dc6b9566809fec01427ce7669ca9c5a316a48d7858f1b99d2e2d5aeac1"} Jan 29 16:28:33 crc kubenswrapper[4886]: I0129 16:28:33.902055 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-28t5x" event={"ID":"a5d78538-806d-458c-ae3c-4ac03596fe18","Type":"ContainerStarted","Data":"0c853deda8ba1f4aae5c528542ebfb9161a926cce83619aeaaff27f2ffc6e02b"} Jan 29 16:28:33 crc kubenswrapper[4886]: I0129 16:28:33.902088 4886 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-28t5x" event={"ID":"a5d78538-806d-458c-ae3c-4ac03596fe18","Type":"ContainerStarted","Data":"2fd57ba88d837f53e8ff09ec3605ec748ad76981bc58792c786485e48a3c66f4"} Jan 29 16:28:33 crc kubenswrapper[4886]: I0129 16:28:33.902102 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-28t5x" event={"ID":"a5d78538-806d-458c-ae3c-4ac03596fe18","Type":"ContainerStarted","Data":"394c70b2ba5552c56c03afe6e4fd4ee92b9edc2b2bd22a48af44e7f66c6b7115"} Jan 29 16:28:33 crc kubenswrapper[4886]: I0129 16:28:33.917092 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-w4847" podStartSLOduration=2.120780124 podStartE2EDuration="3.917061893s" podCreationTimestamp="2026-01-29 16:28:30 +0000 UTC" firstStartedPulling="2026-01-29 16:28:31.630084258 +0000 UTC m=+394.538803530" lastFinishedPulling="2026-01-29 16:28:33.426366027 +0000 UTC m=+396.335085299" observedRunningTime="2026-01-29 16:28:33.914396709 +0000 UTC m=+396.823116001" watchObservedRunningTime="2026-01-29 16:28:33.917061893 +0000 UTC m=+396.825781165" Jan 29 16:28:33 crc kubenswrapper[4886]: I0129 16:28:33.940156 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-28t5x" podStartSLOduration=1.996917189 podStartE2EDuration="3.940116444s" podCreationTimestamp="2026-01-29 16:28:30 +0000 UTC" firstStartedPulling="2026-01-29 16:28:31.371770324 +0000 UTC m=+394.280489596" lastFinishedPulling="2026-01-29 16:28:33.314969579 +0000 UTC m=+396.223688851" observedRunningTime="2026-01-29 16:28:33.935307501 +0000 UTC m=+396.844026793" watchObservedRunningTime="2026-01-29 16:28:33.940116444 +0000 UTC m=+396.848835736" Jan 29 16:28:34 crc kubenswrapper[4886]: I0129 16:28:34.911503 4886 generic.go:334] "Generic (PLEG): container finished" podID="43bcb21d-ccb0-474a-8a4b-20c4fd56904a" containerID="c2820ceebc223c1c65aa306b7a718275e55032ebaacfee4f9a51773b5b4cbd79" exitCode=0 Jan 29 16:28:34 crc kubenswrapper[4886]: I0129 16:28:34.911652 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"43bcb21d-ccb0-474a-8a4b-20c4fd56904a","Type":"ContainerDied","Data":"c2820ceebc223c1c65aa306b7a718275e55032ebaacfee4f9a51773b5b4cbd79"} Jan 29 16:28:34 crc kubenswrapper[4886]: I0129 16:28:34.914913 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-tsz6m" event={"ID":"86cef950-d7b4-468c-bb9f-e71a98ffe676","Type":"ContainerStarted","Data":"c32966a40b7f6cae2b96c8b36b42bffb0cd7e95653016c4bbfe44e4392146547"} Jan 29 16:28:34 crc kubenswrapper[4886]: I0129 16:28:34.915049 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-tsz6m" event={"ID":"86cef950-d7b4-468c-bb9f-e71a98ffe676","Type":"ContainerStarted","Data":"20300c24bdc1beb8993563d477c2cef1160392a91272f3b2ac54e0c098dc63c3"} Jan 29 16:28:34 crc kubenswrapper[4886]: I0129 16:28:34.970357 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-tsz6m" podStartSLOduration=3.005660485 podStartE2EDuration="4.970334067s" podCreationTimestamp="2026-01-29 16:28:30 +0000 UTC" firstStartedPulling="2026-01-29 16:28:30.963139889 +0000 UTC m=+393.871859161" lastFinishedPulling="2026-01-29 16:28:32.927813471 +0000 UTC m=+395.836532743" 
observedRunningTime="2026-01-29 16:28:34.964663809 +0000 UTC m=+397.873383091" watchObservedRunningTime="2026-01-29 16:28:34.970334067 +0000 UTC m=+397.879053349" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.394303 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-54754b854f-fgkbk"] Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.395009 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-54754b854f-fgkbk" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.415361 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-54754b854f-fgkbk"] Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.573702 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/56fe8de1-76b0-42ad-9f62-53ac51eac78d-console-oauth-config\") pod \"console-54754b854f-fgkbk\" (UID: \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\") " pod="openshift-console/console-54754b854f-fgkbk" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.573821 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/56fe8de1-76b0-42ad-9f62-53ac51eac78d-console-serving-cert\") pod \"console-54754b854f-fgkbk\" (UID: \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\") " pod="openshift-console/console-54754b854f-fgkbk" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.573874 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/56fe8de1-76b0-42ad-9f62-53ac51eac78d-service-ca\") pod \"console-54754b854f-fgkbk\" (UID: \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\") " pod="openshift-console/console-54754b854f-fgkbk" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.574010 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/56fe8de1-76b0-42ad-9f62-53ac51eac78d-oauth-serving-cert\") pod \"console-54754b854f-fgkbk\" (UID: \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\") " pod="openshift-console/console-54754b854f-fgkbk" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.574063 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqdgg\" (UniqueName: \"kubernetes.io/projected/56fe8de1-76b0-42ad-9f62-53ac51eac78d-kube-api-access-hqdgg\") pod \"console-54754b854f-fgkbk\" (UID: \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\") " pod="openshift-console/console-54754b854f-fgkbk" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.574095 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/56fe8de1-76b0-42ad-9f62-53ac51eac78d-console-config\") pod \"console-54754b854f-fgkbk\" (UID: \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\") " pod="openshift-console/console-54754b854f-fgkbk" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.574151 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56fe8de1-76b0-42ad-9f62-53ac51eac78d-trusted-ca-bundle\") pod \"console-54754b854f-fgkbk\" (UID: \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\") " 
pod="openshift-console/console-54754b854f-fgkbk" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.675807 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56fe8de1-76b0-42ad-9f62-53ac51eac78d-trusted-ca-bundle\") pod \"console-54754b854f-fgkbk\" (UID: \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\") " pod="openshift-console/console-54754b854f-fgkbk" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.675895 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/56fe8de1-76b0-42ad-9f62-53ac51eac78d-console-oauth-config\") pod \"console-54754b854f-fgkbk\" (UID: \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\") " pod="openshift-console/console-54754b854f-fgkbk" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.675927 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/56fe8de1-76b0-42ad-9f62-53ac51eac78d-console-serving-cert\") pod \"console-54754b854f-fgkbk\" (UID: \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\") " pod="openshift-console/console-54754b854f-fgkbk" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.675946 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/56fe8de1-76b0-42ad-9f62-53ac51eac78d-service-ca\") pod \"console-54754b854f-fgkbk\" (UID: \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\") " pod="openshift-console/console-54754b854f-fgkbk" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.675968 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/56fe8de1-76b0-42ad-9f62-53ac51eac78d-oauth-serving-cert\") pod \"console-54754b854f-fgkbk\" (UID: \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\") " pod="openshift-console/console-54754b854f-fgkbk" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.675983 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqdgg\" (UniqueName: \"kubernetes.io/projected/56fe8de1-76b0-42ad-9f62-53ac51eac78d-kube-api-access-hqdgg\") pod \"console-54754b854f-fgkbk\" (UID: \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\") " pod="openshift-console/console-54754b854f-fgkbk" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.676002 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/56fe8de1-76b0-42ad-9f62-53ac51eac78d-console-config\") pod \"console-54754b854f-fgkbk\" (UID: \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\") " pod="openshift-console/console-54754b854f-fgkbk" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.676943 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56fe8de1-76b0-42ad-9f62-53ac51eac78d-trusted-ca-bundle\") pod \"console-54754b854f-fgkbk\" (UID: \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\") " pod="openshift-console/console-54754b854f-fgkbk" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.676965 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/56fe8de1-76b0-42ad-9f62-53ac51eac78d-console-config\") pod \"console-54754b854f-fgkbk\" (UID: \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\") " 
pod="openshift-console/console-54754b854f-fgkbk" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.677572 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/56fe8de1-76b0-42ad-9f62-53ac51eac78d-oauth-serving-cert\") pod \"console-54754b854f-fgkbk\" (UID: \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\") " pod="openshift-console/console-54754b854f-fgkbk" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.679356 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/56fe8de1-76b0-42ad-9f62-53ac51eac78d-service-ca\") pod \"console-54754b854f-fgkbk\" (UID: \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\") " pod="openshift-console/console-54754b854f-fgkbk" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.681965 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/56fe8de1-76b0-42ad-9f62-53ac51eac78d-console-oauth-config\") pod \"console-54754b854f-fgkbk\" (UID: \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\") " pod="openshift-console/console-54754b854f-fgkbk" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.685647 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/56fe8de1-76b0-42ad-9f62-53ac51eac78d-console-serving-cert\") pod \"console-54754b854f-fgkbk\" (UID: \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\") " pod="openshift-console/console-54754b854f-fgkbk" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.692069 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqdgg\" (UniqueName: \"kubernetes.io/projected/56fe8de1-76b0-42ad-9f62-53ac51eac78d-kube-api-access-hqdgg\") pod \"console-54754b854f-fgkbk\" (UID: \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\") " pod="openshift-console/console-54754b854f-fgkbk" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.745561 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-54754b854f-fgkbk" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.885760 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-75f86dc845-cd7l9"] Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.887180 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-75f86dc845-cd7l9" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.891509 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-w5r7w" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.891864 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.892127 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-8v90ublngch0f" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.892376 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.892580 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.892782 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.899930 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-75f86dc845-cd7l9"] Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.983009 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/768b9eb6-0280-46a3-a61a-295bd94524a5-metrics-server-audit-profiles\") pod \"metrics-server-75f86dc845-cd7l9\" (UID: \"768b9eb6-0280-46a3-a61a-295bd94524a5\") " pod="openshift-monitoring/metrics-server-75f86dc845-cd7l9" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.983062 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4xrw\" (UniqueName: \"kubernetes.io/projected/768b9eb6-0280-46a3-a61a-295bd94524a5-kube-api-access-p4xrw\") pod \"metrics-server-75f86dc845-cd7l9\" (UID: \"768b9eb6-0280-46a3-a61a-295bd94524a5\") " pod="openshift-monitoring/metrics-server-75f86dc845-cd7l9" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.983088 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/768b9eb6-0280-46a3-a61a-295bd94524a5-audit-log\") pod \"metrics-server-75f86dc845-cd7l9\" (UID: \"768b9eb6-0280-46a3-a61a-295bd94524a5\") " pod="openshift-monitoring/metrics-server-75f86dc845-cd7l9" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.983118 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/768b9eb6-0280-46a3-a61a-295bd94524a5-secret-metrics-client-certs\") pod \"metrics-server-75f86dc845-cd7l9\" (UID: \"768b9eb6-0280-46a3-a61a-295bd94524a5\") " pod="openshift-monitoring/metrics-server-75f86dc845-cd7l9" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.983464 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/768b9eb6-0280-46a3-a61a-295bd94524a5-secret-metrics-server-tls\") pod \"metrics-server-75f86dc845-cd7l9\" (UID: \"768b9eb6-0280-46a3-a61a-295bd94524a5\") " pod="openshift-monitoring/metrics-server-75f86dc845-cd7l9" 
Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.983573 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/768b9eb6-0280-46a3-a61a-295bd94524a5-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-75f86dc845-cd7l9\" (UID: \"768b9eb6-0280-46a3-a61a-295bd94524a5\") " pod="openshift-monitoring/metrics-server-75f86dc845-cd7l9" Jan 29 16:28:35 crc kubenswrapper[4886]: I0129 16:28:35.983715 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/768b9eb6-0280-46a3-a61a-295bd94524a5-client-ca-bundle\") pod \"metrics-server-75f86dc845-cd7l9\" (UID: \"768b9eb6-0280-46a3-a61a-295bd94524a5\") " pod="openshift-monitoring/metrics-server-75f86dc845-cd7l9" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.078952 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2gkn5"] Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.079891 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-2gkn5" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.087050 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/768b9eb6-0280-46a3-a61a-295bd94524a5-metrics-server-audit-profiles\") pod \"metrics-server-75f86dc845-cd7l9\" (UID: \"768b9eb6-0280-46a3-a61a-295bd94524a5\") " pod="openshift-monitoring/metrics-server-75f86dc845-cd7l9" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.087167 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4xrw\" (UniqueName: \"kubernetes.io/projected/768b9eb6-0280-46a3-a61a-295bd94524a5-kube-api-access-p4xrw\") pod \"metrics-server-75f86dc845-cd7l9\" (UID: \"768b9eb6-0280-46a3-a61a-295bd94524a5\") " pod="openshift-monitoring/metrics-server-75f86dc845-cd7l9" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.087252 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/768b9eb6-0280-46a3-a61a-295bd94524a5-audit-log\") pod \"metrics-server-75f86dc845-cd7l9\" (UID: \"768b9eb6-0280-46a3-a61a-295bd94524a5\") " pod="openshift-monitoring/metrics-server-75f86dc845-cd7l9" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.087374 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/768b9eb6-0280-46a3-a61a-295bd94524a5-secret-metrics-client-certs\") pod \"metrics-server-75f86dc845-cd7l9\" (UID: \"768b9eb6-0280-46a3-a61a-295bd94524a5\") " pod="openshift-monitoring/metrics-server-75f86dc845-cd7l9" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.087486 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/768b9eb6-0280-46a3-a61a-295bd94524a5-secret-metrics-server-tls\") pod \"metrics-server-75f86dc845-cd7l9\" (UID: \"768b9eb6-0280-46a3-a61a-295bd94524a5\") " pod="openshift-monitoring/metrics-server-75f86dc845-cd7l9" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.087526 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/768b9eb6-0280-46a3-a61a-295bd94524a5-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-75f86dc845-cd7l9\" (UID: \"768b9eb6-0280-46a3-a61a-295bd94524a5\") " pod="openshift-monitoring/metrics-server-75f86dc845-cd7l9" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.087612 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/768b9eb6-0280-46a3-a61a-295bd94524a5-client-ca-bundle\") pod \"metrics-server-75f86dc845-cd7l9\" (UID: \"768b9eb6-0280-46a3-a61a-295bd94524a5\") " pod="openshift-monitoring/metrics-server-75f86dc845-cd7l9" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.089661 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/768b9eb6-0280-46a3-a61a-295bd94524a5-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-75f86dc845-cd7l9\" (UID: \"768b9eb6-0280-46a3-a61a-295bd94524a5\") " pod="openshift-monitoring/metrics-server-75f86dc845-cd7l9" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.090286 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/768b9eb6-0280-46a3-a61a-295bd94524a5-audit-log\") pod \"metrics-server-75f86dc845-cd7l9\" (UID: \"768b9eb6-0280-46a3-a61a-295bd94524a5\") " pod="openshift-monitoring/metrics-server-75f86dc845-cd7l9" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.095180 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/768b9eb6-0280-46a3-a61a-295bd94524a5-secret-metrics-server-tls\") pod \"metrics-server-75f86dc845-cd7l9\" (UID: \"768b9eb6-0280-46a3-a61a-295bd94524a5\") " pod="openshift-monitoring/metrics-server-75f86dc845-cd7l9" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.095251 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/768b9eb6-0280-46a3-a61a-295bd94524a5-secret-metrics-client-certs\") pod \"metrics-server-75f86dc845-cd7l9\" (UID: \"768b9eb6-0280-46a3-a61a-295bd94524a5\") " pod="openshift-monitoring/metrics-server-75f86dc845-cd7l9" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.095997 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/768b9eb6-0280-46a3-a61a-295bd94524a5-client-ca-bundle\") pod \"metrics-server-75f86dc845-cd7l9\" (UID: \"768b9eb6-0280-46a3-a61a-295bd94524a5\") " pod="openshift-monitoring/metrics-server-75f86dc845-cd7l9" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.099164 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2gkn5"] Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.100509 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/768b9eb6-0280-46a3-a61a-295bd94524a5-metrics-server-audit-profiles\") pod \"metrics-server-75f86dc845-cd7l9\" (UID: \"768b9eb6-0280-46a3-a61a-295bd94524a5\") " pod="openshift-monitoring/metrics-server-75f86dc845-cd7l9" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.125620 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4xrw\" 
(UniqueName: \"kubernetes.io/projected/768b9eb6-0280-46a3-a61a-295bd94524a5-kube-api-access-p4xrw\") pod \"metrics-server-75f86dc845-cd7l9\" (UID: \"768b9eb6-0280-46a3-a61a-295bd94524a5\") " pod="openshift-monitoring/metrics-server-75f86dc845-cd7l9" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.189358 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/80d1fdd6-c3ce-47c5-8a0f-4266880adb73-registry-tls\") pod \"image-registry-66df7c8f76-2gkn5\" (UID: \"80d1fdd6-c3ce-47c5-8a0f-4266880adb73\") " pod="openshift-image-registry/image-registry-66df7c8f76-2gkn5" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.189417 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/80d1fdd6-c3ce-47c5-8a0f-4266880adb73-registry-certificates\") pod \"image-registry-66df7c8f76-2gkn5\" (UID: \"80d1fdd6-c3ce-47c5-8a0f-4266880adb73\") " pod="openshift-image-registry/image-registry-66df7c8f76-2gkn5" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.189455 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/80d1fdd6-c3ce-47c5-8a0f-4266880adb73-bound-sa-token\") pod \"image-registry-66df7c8f76-2gkn5\" (UID: \"80d1fdd6-c3ce-47c5-8a0f-4266880adb73\") " pod="openshift-image-registry/image-registry-66df7c8f76-2gkn5" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.189497 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/80d1fdd6-c3ce-47c5-8a0f-4266880adb73-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2gkn5\" (UID: \"80d1fdd6-c3ce-47c5-8a0f-4266880adb73\") " pod="openshift-image-registry/image-registry-66df7c8f76-2gkn5" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.189523 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80d1fdd6-c3ce-47c5-8a0f-4266880adb73-trusted-ca\") pod \"image-registry-66df7c8f76-2gkn5\" (UID: \"80d1fdd6-c3ce-47c5-8a0f-4266880adb73\") " pod="openshift-image-registry/image-registry-66df7c8f76-2gkn5" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.189561 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-2gkn5\" (UID: \"80d1fdd6-c3ce-47c5-8a0f-4266880adb73\") " pod="openshift-image-registry/image-registry-66df7c8f76-2gkn5" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.189597 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvh4t\" (UniqueName: \"kubernetes.io/projected/80d1fdd6-c3ce-47c5-8a0f-4266880adb73-kube-api-access-bvh4t\") pod \"image-registry-66df7c8f76-2gkn5\" (UID: \"80d1fdd6-c3ce-47c5-8a0f-4266880adb73\") " pod="openshift-image-registry/image-registry-66df7c8f76-2gkn5" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.189647 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/80d1fdd6-c3ce-47c5-8a0f-4266880adb73-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2gkn5\" (UID: \"80d1fdd6-c3ce-47c5-8a0f-4266880adb73\") " pod="openshift-image-registry/image-registry-66df7c8f76-2gkn5" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.210261 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-2gkn5\" (UID: \"80d1fdd6-c3ce-47c5-8a0f-4266880adb73\") " pod="openshift-image-registry/image-registry-66df7c8f76-2gkn5" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.216588 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-75f86dc845-cd7l9" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.291433 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/80d1fdd6-c3ce-47c5-8a0f-4266880adb73-registry-tls\") pod \"image-registry-66df7c8f76-2gkn5\" (UID: \"80d1fdd6-c3ce-47c5-8a0f-4266880adb73\") " pod="openshift-image-registry/image-registry-66df7c8f76-2gkn5" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.291481 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/80d1fdd6-c3ce-47c5-8a0f-4266880adb73-registry-certificates\") pod \"image-registry-66df7c8f76-2gkn5\" (UID: \"80d1fdd6-c3ce-47c5-8a0f-4266880adb73\") " pod="openshift-image-registry/image-registry-66df7c8f76-2gkn5" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.291520 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/80d1fdd6-c3ce-47c5-8a0f-4266880adb73-bound-sa-token\") pod \"image-registry-66df7c8f76-2gkn5\" (UID: \"80d1fdd6-c3ce-47c5-8a0f-4266880adb73\") " pod="openshift-image-registry/image-registry-66df7c8f76-2gkn5" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.291573 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/80d1fdd6-c3ce-47c5-8a0f-4266880adb73-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2gkn5\" (UID: \"80d1fdd6-c3ce-47c5-8a0f-4266880adb73\") " pod="openshift-image-registry/image-registry-66df7c8f76-2gkn5" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.291635 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80d1fdd6-c3ce-47c5-8a0f-4266880adb73-trusted-ca\") pod \"image-registry-66df7c8f76-2gkn5\" (UID: \"80d1fdd6-c3ce-47c5-8a0f-4266880adb73\") " pod="openshift-image-registry/image-registry-66df7c8f76-2gkn5" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.292914 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvh4t\" (UniqueName: \"kubernetes.io/projected/80d1fdd6-c3ce-47c5-8a0f-4266880adb73-kube-api-access-bvh4t\") pod \"image-registry-66df7c8f76-2gkn5\" (UID: \"80d1fdd6-c3ce-47c5-8a0f-4266880adb73\") " pod="openshift-image-registry/image-registry-66df7c8f76-2gkn5" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.292984 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/80d1fdd6-c3ce-47c5-8a0f-4266880adb73-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2gkn5\" (UID: \"80d1fdd6-c3ce-47c5-8a0f-4266880adb73\") " pod="openshift-image-registry/image-registry-66df7c8f76-2gkn5" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.293128 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/80d1fdd6-c3ce-47c5-8a0f-4266880adb73-registry-certificates\") pod \"image-registry-66df7c8f76-2gkn5\" (UID: \"80d1fdd6-c3ce-47c5-8a0f-4266880adb73\") " pod="openshift-image-registry/image-registry-66df7c8f76-2gkn5" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.293471 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/80d1fdd6-c3ce-47c5-8a0f-4266880adb73-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2gkn5\" (UID: \"80d1fdd6-c3ce-47c5-8a0f-4266880adb73\") " pod="openshift-image-registry/image-registry-66df7c8f76-2gkn5" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.293739 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80d1fdd6-c3ce-47c5-8a0f-4266880adb73-trusted-ca\") pod \"image-registry-66df7c8f76-2gkn5\" (UID: \"80d1fdd6-c3ce-47c5-8a0f-4266880adb73\") " pod="openshift-image-registry/image-registry-66df7c8f76-2gkn5" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.302577 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/80d1fdd6-c3ce-47c5-8a0f-4266880adb73-registry-tls\") pod \"image-registry-66df7c8f76-2gkn5\" (UID: \"80d1fdd6-c3ce-47c5-8a0f-4266880adb73\") " pod="openshift-image-registry/image-registry-66df7c8f76-2gkn5" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.313185 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/80d1fdd6-c3ce-47c5-8a0f-4266880adb73-bound-sa-token\") pod \"image-registry-66df7c8f76-2gkn5\" (UID: \"80d1fdd6-c3ce-47c5-8a0f-4266880adb73\") " pod="openshift-image-registry/image-registry-66df7c8f76-2gkn5" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.314150 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/80d1fdd6-c3ce-47c5-8a0f-4266880adb73-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2gkn5\" (UID: \"80d1fdd6-c3ce-47c5-8a0f-4266880adb73\") " pod="openshift-image-registry/image-registry-66df7c8f76-2gkn5" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.317903 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvh4t\" (UniqueName: \"kubernetes.io/projected/80d1fdd6-c3ce-47c5-8a0f-4266880adb73-kube-api-access-bvh4t\") pod \"image-registry-66df7c8f76-2gkn5\" (UID: \"80d1fdd6-c3ce-47c5-8a0f-4266880adb73\") " pod="openshift-image-registry/image-registry-66df7c8f76-2gkn5" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.375030 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-6466f85649-t8mxw"] Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.378442 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6466f85649-t8mxw" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.383398 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.384393 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.391597 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6466f85649-t8mxw"] Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.430101 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-2gkn5" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.499367 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b9eaba21-71aa-42b0-a4dd-f46aeeb38d75-monitoring-plugin-cert\") pod \"monitoring-plugin-6466f85649-t8mxw\" (UID: \"b9eaba21-71aa-42b0-a4dd-f46aeeb38d75\") " pod="openshift-monitoring/monitoring-plugin-6466f85649-t8mxw" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.600893 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b9eaba21-71aa-42b0-a4dd-f46aeeb38d75-monitoring-plugin-cert\") pod \"monitoring-plugin-6466f85649-t8mxw\" (UID: \"b9eaba21-71aa-42b0-a4dd-f46aeeb38d75\") " pod="openshift-monitoring/monitoring-plugin-6466f85649-t8mxw" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.606622 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b9eaba21-71aa-42b0-a4dd-f46aeeb38d75-monitoring-plugin-cert\") pod \"monitoring-plugin-6466f85649-t8mxw\" (UID: \"b9eaba21-71aa-42b0-a4dd-f46aeeb38d75\") " pod="openshift-monitoring/monitoring-plugin-6466f85649-t8mxw" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.652477 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-54754b854f-fgkbk"] Jan 29 16:28:36 crc kubenswrapper[4886]: W0129 16:28:36.659628 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56fe8de1_76b0_42ad_9f62_53ac51eac78d.slice/crio-92457371ca67ffbaa6957a21cf77005c4601275089a8ad1b5d44bb6186c2a4ce WatchSource:0}: Error finding container 92457371ca67ffbaa6957a21cf77005c4601275089a8ad1b5d44bb6186c2a4ce: Status 404 returned error can't find the container with id 92457371ca67ffbaa6957a21cf77005c4601275089a8ad1b5d44bb6186c2a4ce Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.704823 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6466f85649-t8mxw" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.725373 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-75f86dc845-cd7l9"] Jan 29 16:28:36 crc kubenswrapper[4886]: W0129 16:28:36.738526 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod768b9eb6_0280_46a3_a61a_295bd94524a5.slice/crio-4eaa82eb79542e700a3dc1ebd54a2baf71ac12de9a804666daeb51f3971a6fbe WatchSource:0}: Error finding container 4eaa82eb79542e700a3dc1ebd54a2baf71ac12de9a804666daeb51f3971a6fbe: Status 404 returned error can't find the container with id 4eaa82eb79542e700a3dc1ebd54a2baf71ac12de9a804666daeb51f3971a6fbe Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.852694 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2gkn5"] Jan 29 16:28:36 crc kubenswrapper[4886]: W0129 16:28:36.855469 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80d1fdd6_c3ce_47c5_8a0f_4266880adb73.slice/crio-90e4fcdc257a1de4944f96f97343d73670d7eee11293b0d101d72b9536d01b39 WatchSource:0}: Error finding container 90e4fcdc257a1de4944f96f97343d73670d7eee11293b0d101d72b9536d01b39: Status 404 returned error can't find the container with id 90e4fcdc257a1de4944f96f97343d73670d7eee11293b0d101d72b9536d01b39 Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.941761 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-54754b854f-fgkbk" event={"ID":"56fe8de1-76b0-42ad-9f62-53ac51eac78d","Type":"ContainerStarted","Data":"92457371ca67ffbaa6957a21cf77005c4601275089a8ad1b5d44bb6186c2a4ce"} Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.945161 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-2gkn5" event={"ID":"80d1fdd6-c3ce-47c5-8a0f-4266880adb73","Type":"ContainerStarted","Data":"90e4fcdc257a1de4944f96f97343d73670d7eee11293b0d101d72b9536d01b39"} Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.956009 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-75f86dc845-cd7l9" event={"ID":"768b9eb6-0280-46a3-a61a-295bd94524a5","Type":"ContainerStarted","Data":"4eaa82eb79542e700a3dc1ebd54a2baf71ac12de9a804666daeb51f3971a6fbe"} Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.960013 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" event={"ID":"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe","Type":"ContainerStarted","Data":"5a79d99b5f579a92970a818ef0dffd3600c07cc8151dbc6cc7e2b9555f8bee95"} Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.978413 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.981476 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.986308 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.986687 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.993338 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.993416 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.993476 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.993618 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-p5vdv" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.993718 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.993882 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.994039 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.994165 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-4l5e9npcpeq8g" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.994381 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Jan 29 16:28:36 crc kubenswrapper[4886]: I0129 16:28:36.995015 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.000074 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.007710 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.095145 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6466f85649-t8mxw"] Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.112752 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/48feb470-6d6f-4fa2-a419-40698fb3a20a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.112816 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/48feb470-6d6f-4fa2-a419-40698fb3a20a-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: 
\"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.112848 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/48feb470-6d6f-4fa2-a419-40698fb3a20a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.112869 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/48feb470-6d6f-4fa2-a419-40698fb3a20a-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.112892 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/48feb470-6d6f-4fa2-a419-40698fb3a20a-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.112915 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/48feb470-6d6f-4fa2-a419-40698fb3a20a-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.112957 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/48feb470-6d6f-4fa2-a419-40698fb3a20a-config\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.113004 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48feb470-6d6f-4fa2-a419-40698fb3a20a-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.113042 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/48feb470-6d6f-4fa2-a419-40698fb3a20a-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.113073 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/48feb470-6d6f-4fa2-a419-40698fb3a20a-web-config\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.113099 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48lp8\" (UniqueName: \"kubernetes.io/projected/48feb470-6d6f-4fa2-a419-40698fb3a20a-kube-api-access-48lp8\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.113128 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/48feb470-6d6f-4fa2-a419-40698fb3a20a-config-out\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.113148 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/48feb470-6d6f-4fa2-a419-40698fb3a20a-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.113167 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48feb470-6d6f-4fa2-a419-40698fb3a20a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.113189 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/48feb470-6d6f-4fa2-a419-40698fb3a20a-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.113220 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/48feb470-6d6f-4fa2-a419-40698fb3a20a-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.113244 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48feb470-6d6f-4fa2-a419-40698fb3a20a-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.113270 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/48feb470-6d6f-4fa2-a419-40698fb3a20a-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.214965 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/48feb470-6d6f-4fa2-a419-40698fb3a20a-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " 
pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.215066 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/48feb470-6d6f-4fa2-a419-40698fb3a20a-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.215107 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48feb470-6d6f-4fa2-a419-40698fb3a20a-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.215168 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/48feb470-6d6f-4fa2-a419-40698fb3a20a-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.215205 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/48feb470-6d6f-4fa2-a419-40698fb3a20a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.215234 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/48feb470-6d6f-4fa2-a419-40698fb3a20a-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.215268 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/48feb470-6d6f-4fa2-a419-40698fb3a20a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.215285 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/48feb470-6d6f-4fa2-a419-40698fb3a20a-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.215304 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/48feb470-6d6f-4fa2-a419-40698fb3a20a-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.215319 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/48feb470-6d6f-4fa2-a419-40698fb3a20a-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: 
\"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.215422 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/48feb470-6d6f-4fa2-a419-40698fb3a20a-config\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.215446 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48feb470-6d6f-4fa2-a419-40698fb3a20a-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.215475 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/48feb470-6d6f-4fa2-a419-40698fb3a20a-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.215506 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/48feb470-6d6f-4fa2-a419-40698fb3a20a-web-config\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.215530 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48lp8\" (UniqueName: \"kubernetes.io/projected/48feb470-6d6f-4fa2-a419-40698fb3a20a-kube-api-access-48lp8\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.215549 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/48feb470-6d6f-4fa2-a419-40698fb3a20a-config-out\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.215570 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/48feb470-6d6f-4fa2-a419-40698fb3a20a-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.215590 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48feb470-6d6f-4fa2-a419-40698fb3a20a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.216490 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/48feb470-6d6f-4fa2-a419-40698fb3a20a-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " 
pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.217306 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48feb470-6d6f-4fa2-a419-40698fb3a20a-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.217483 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48feb470-6d6f-4fa2-a419-40698fb3a20a-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.218066 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48feb470-6d6f-4fa2-a419-40698fb3a20a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.222683 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/48feb470-6d6f-4fa2-a419-40698fb3a20a-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.222701 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/48feb470-6d6f-4fa2-a419-40698fb3a20a-config\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.222979 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/48feb470-6d6f-4fa2-a419-40698fb3a20a-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.223030 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/48feb470-6d6f-4fa2-a419-40698fb3a20a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.223378 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/48feb470-6d6f-4fa2-a419-40698fb3a20a-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.223874 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/48feb470-6d6f-4fa2-a419-40698fb3a20a-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " 
pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.224950 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/48feb470-6d6f-4fa2-a419-40698fb3a20a-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.225945 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/48feb470-6d6f-4fa2-a419-40698fb3a20a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.226153 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/48feb470-6d6f-4fa2-a419-40698fb3a20a-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.226443 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/48feb470-6d6f-4fa2-a419-40698fb3a20a-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.227221 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/48feb470-6d6f-4fa2-a419-40698fb3a20a-config-out\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.231088 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/48feb470-6d6f-4fa2-a419-40698fb3a20a-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.232156 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/48feb470-6d6f-4fa2-a419-40698fb3a20a-web-config\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.234065 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48lp8\" (UniqueName: \"kubernetes.io/projected/48feb470-6d6f-4fa2-a419-40698fb3a20a-kube-api-access-48lp8\") pod \"prometheus-k8s-0\" (UID: \"48feb470-6d6f-4fa2-a419-40698fb3a20a\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.312794 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.757651 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.967673 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-54754b854f-fgkbk" event={"ID":"56fe8de1-76b0-42ad-9f62-53ac51eac78d","Type":"ContainerStarted","Data":"912b8ca8f57d0bc2a261b229c7ccc6eafc982f004db336b3f33746c6d8c5a790"} Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.969019 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-2gkn5" event={"ID":"80d1fdd6-c3ce-47c5-8a0f-4266880adb73","Type":"ContainerStarted","Data":"f7f21339e8b5d9e979f032ac68d1f691b895f9169eb17316a17d9e74f3a087d8"} Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.969091 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-2gkn5" Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.970216 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6466f85649-t8mxw" event={"ID":"b9eaba21-71aa-42b0-a4dd-f46aeeb38d75","Type":"ContainerStarted","Data":"474fbaf9ec6360367c8c7de16802779bff73d18a04ea0a7363497a40275621fd"} Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.972551 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" event={"ID":"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe","Type":"ContainerStarted","Data":"c8261013a52374865a926bb30817ba8c7e1d18820a123b54291a565eaf202a50"} Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.972589 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" event={"ID":"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe","Type":"ContainerStarted","Data":"48fdba91188bffbdcc4503011cb66c2b6eb969cb39aa5279f885e11ef60b2240"} Jan 29 16:28:37 crc kubenswrapper[4886]: I0129 16:28:37.989013 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-54754b854f-fgkbk" podStartSLOduration=2.988994392 podStartE2EDuration="2.988994392s" podCreationTimestamp="2026-01-29 16:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:28:37.983711065 +0000 UTC m=+400.892430347" watchObservedRunningTime="2026-01-29 16:28:37.988994392 +0000 UTC m=+400.897713664" Jan 29 16:28:38 crc kubenswrapper[4886]: I0129 16:28:38.006164 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-2gkn5" podStartSLOduration=2.006141339 podStartE2EDuration="2.006141339s" podCreationTimestamp="2026-01-29 16:28:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:28:38.002526779 +0000 UTC m=+400.911246071" watchObservedRunningTime="2026-01-29 16:28:38.006141339 +0000 UTC m=+400.914860621" Jan 29 16:28:38 crc kubenswrapper[4886]: W0129 16:28:38.013603 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48feb470_6d6f_4fa2_a419_40698fb3a20a.slice/crio-98d7b8f912eb792e2359b20d73cde0c761ce41249ed31c81319b81160a79c2be WatchSource:0}: 
Error finding container 98d7b8f912eb792e2359b20d73cde0c761ce41249ed31c81319b81160a79c2be: Status 404 returned error can't find the container with id 98d7b8f912eb792e2359b20d73cde0c761ce41249ed31c81319b81160a79c2be Jan 29 16:28:38 crc kubenswrapper[4886]: I0129 16:28:38.979502 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"48feb470-6d6f-4fa2-a419-40698fb3a20a","Type":"ContainerStarted","Data":"98d7b8f912eb792e2359b20d73cde0c761ce41249ed31c81319b81160a79c2be"} Jan 29 16:28:39 crc kubenswrapper[4886]: I0129 16:28:39.985908 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-75f86dc845-cd7l9" event={"ID":"768b9eb6-0280-46a3-a61a-295bd94524a5","Type":"ContainerStarted","Data":"aa327d0ed65b7a7a6d9d1efaed64b62b4582738e8bb6568c032b576d5049498c"} Jan 29 16:28:39 crc kubenswrapper[4886]: I0129 16:28:39.988067 4886 generic.go:334] "Generic (PLEG): container finished" podID="48feb470-6d6f-4fa2-a419-40698fb3a20a" containerID="864465bfce45690db67ec09c51f7784b7586fd57663ec14d23d55a7249531051" exitCode=0 Jan 29 16:28:39 crc kubenswrapper[4886]: I0129 16:28:39.988103 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"48feb470-6d6f-4fa2-a419-40698fb3a20a","Type":"ContainerDied","Data":"864465bfce45690db67ec09c51f7784b7586fd57663ec14d23d55a7249531051"} Jan 29 16:28:39 crc kubenswrapper[4886]: I0129 16:28:39.991759 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" event={"ID":"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe","Type":"ContainerStarted","Data":"9dfa9660a643ebfef6fb6beb437138e9d0be1d72066ba292990b917c1f8251b9"} Jan 29 16:28:39 crc kubenswrapper[4886]: I0129 16:28:39.991779 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" event={"ID":"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe","Type":"ContainerStarted","Data":"e73536c76aabc41333f27b78607e5061d2acbf3535d03e3dfc24c84ead509204"} Jan 29 16:28:39 crc kubenswrapper[4886]: I0129 16:28:39.991788 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" event={"ID":"cf13b56e-deb1-4a2d-8d41-139db9eb5dbe","Type":"ContainerStarted","Data":"732aa0bd94ae9bf234c8fa99419b6a18949c973b4a3390e3f424b4e36d62abc0"} Jan 29 16:28:39 crc kubenswrapper[4886]: I0129 16:28:39.991893 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" Jan 29 16:28:39 crc kubenswrapper[4886]: I0129 16:28:39.994426 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"43bcb21d-ccb0-474a-8a4b-20c4fd56904a","Type":"ContainerStarted","Data":"b2929ed4083976446ae3936ef7983e104428f4ba48e0d75e9ebd94b28f258fa0"} Jan 29 16:28:39 crc kubenswrapper[4886]: I0129 16:28:39.994444 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"43bcb21d-ccb0-474a-8a4b-20c4fd56904a","Type":"ContainerStarted","Data":"71d73030becd37dcc45fe94e957c2d80ece7a3e663ac5116bdb57b61dbf19409"} Jan 29 16:28:39 crc kubenswrapper[4886]: I0129 16:28:39.994452 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"43bcb21d-ccb0-474a-8a4b-20c4fd56904a","Type":"ContainerStarted","Data":"cd0fc80222db14407c42a547cf952b46e9ea7abe07369fd9d2acdac3ecdb7eb1"}
Jan 29 16:28:39 crc kubenswrapper[4886]: I0129 16:28:39.994461 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"43bcb21d-ccb0-474a-8a4b-20c4fd56904a","Type":"ContainerStarted","Data":"845da9b1e1e98cb54466ef3261f8816bac1e9ca66a544ea922fb46b70c038ae7"} Jan 29 16:28:39 crc kubenswrapper[4886]: I0129 16:28:39.998395 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6466f85649-t8mxw" event={"ID":"b9eaba21-71aa-42b0-a4dd-f46aeeb38d75","Type":"ContainerStarted","Data":"bcd724ab6af38175b3778fb8df83bd9edb06dcb003618a18fd21213fb0ce461b"} Jan 29 16:28:39 crc kubenswrapper[4886]: I0129 16:28:39.998645 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-6466f85649-t8mxw" Jan 29 16:28:40 crc kubenswrapper[4886]: I0129 16:28:40.001315 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-75f86dc845-cd7l9" podStartSLOduration=2.506843023 podStartE2EDuration="5.001302939s" podCreationTimestamp="2026-01-29 16:28:35 +0000 UTC" firstStartedPulling="2026-01-29 16:28:36.746654761 +0000 UTC m=+399.655374033" lastFinishedPulling="2026-01-29 16:28:39.241114677 +0000 UTC m=+402.149833949" observedRunningTime="2026-01-29 16:28:40.000837276 +0000 UTC m=+402.909556558" watchObservedRunningTime="2026-01-29 16:28:40.001302939 +0000 UTC m=+402.910022211" Jan 29 16:28:40 crc kubenswrapper[4886]: I0129 16:28:40.026370 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-6466f85649-t8mxw" podStartSLOduration=1.881509554 podStartE2EDuration="4.026356296s" podCreationTimestamp="2026-01-29 16:28:36 +0000 UTC" firstStartedPulling="2026-01-29 16:28:37.104206475 +0000 UTC m=+400.012925747" lastFinishedPulling="2026-01-29 16:28:39.249053217 +0000 UTC m=+402.157772489" observedRunningTime="2026-01-29 16:28:40.024486804 +0000 UTC m=+402.933206076" watchObservedRunningTime="2026-01-29 16:28:40.026356296 +0000 UTC m=+402.935075568" Jan 29 16:28:40 crc kubenswrapper[4886]: I0129 16:28:40.035781 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-6466f85649-t8mxw" Jan 29 16:28:40 crc kubenswrapper[4886]: I0129 16:28:40.094717 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" podStartSLOduration=2.604295997 podStartE2EDuration="8.094698197s" podCreationTimestamp="2026-01-29 16:28:32 +0000 UTC" firstStartedPulling="2026-01-29 16:28:33.767759521 +0000 UTC m=+396.676478793" lastFinishedPulling="2026-01-29 16:28:39.258161691 +0000 UTC m=+402.166880993" observedRunningTime="2026-01-29 16:28:40.092101175 +0000 UTC m=+403.000820447" watchObservedRunningTime="2026-01-29 16:28:40.094698197 +0000 UTC m=+403.003417459" Jan 29 16:28:41 crc kubenswrapper[4886]: I0129 16:28:41.010369 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"43bcb21d-ccb0-474a-8a4b-20c4fd56904a","Type":"ContainerStarted","Data":"a6a5f55de3dd19fea82b6bfe910ade259dfcbbfbf8e6492e7415f723d5cb1b9d"} Jan 29 16:28:41 crc kubenswrapper[4886]: I0129 16:28:41.010431 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"43bcb21d-ccb0-474a-8a4b-20c4fd56904a","Type":"ContainerStarted","Data":"e436e36913e98932295a1cded5e80be6f060628efe1ccaff4e27d5666b162782"}
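The "Observed pod startup duration" entries all satisfy one relationship: podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling), where the E2E duration runs from podCreationTimestamp to watchObservedRunningTime (pods that never pulled report the zero time 0001-01-01 and the two durations coincide, as with console-54754b854f-fgkbk). A sketch re-deriving the alertmanager-main-0 figures logged just below; the timestamps are copied from that entry, the formula is simply the one the numbers satisfy:

    // slo_math.go — reproduces the arithmetic visible in the
    // pod_startup_latency_tracker entries (alertmanager-main-0 figures).
    package main

    import (
    	"fmt"
    	"time"
    )

    // Fractional seconds are accepted when parsing even without ".9..." in the layout.
    const layout = "2006-01-02 15:04:05 -0700 MST"

    func mustParse(s string) time.Time {
    	t, err := time.Parse(layout, s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := mustParse("2026-01-29 16:28:31 +0000 UTC")              // podCreationTimestamp
    	firstPull := mustParse("2026-01-29 16:28:32.921014372 +0000 UTC") // firstStartedPulling
    	lastPull := mustParse("2026-01-29 16:28:39.174104743 +0000 UTC")  // lastFinishedPulling
    	observed := mustParse("2026-01-29 16:28:41.044369038 +0000 UTC")  // watchObservedRunningTime

    	e2e := observed.Sub(created)       // 10.044369038s = podStartE2EDuration
    	pulling := lastPull.Sub(firstPull) // 6.253090371s spent pulling images
    	slo := e2e - pulling               // 3.791278667s = podStartSLOduration
    	fmt.Println(e2e, pulling, slo)
    }

The SLO value is logged as a bare float64 of seconds, which is why the entry below reads podStartSLOduration=3.7912786670000003 rather than the rounded 3.791278667s.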
event={"ID":"43bcb21d-ccb0-474a-8a4b-20c4fd56904a","Type":"ContainerStarted","Data":"e436e36913e98932295a1cded5e80be6f060628efe1ccaff4e27d5666b162782"} Jan 29 16:28:41 crc kubenswrapper[4886]: I0129 16:28:41.020425 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-645496d5c-x86sq" Jan 29 16:28:41 crc kubenswrapper[4886]: I0129 16:28:41.044397 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.7912786670000003 podStartE2EDuration="10.044369038s" podCreationTimestamp="2026-01-29 16:28:31 +0000 UTC" firstStartedPulling="2026-01-29 16:28:32.921014372 +0000 UTC m=+395.829733644" lastFinishedPulling="2026-01-29 16:28:39.174104743 +0000 UTC m=+402.082824015" observedRunningTime="2026-01-29 16:28:41.036010686 +0000 UTC m=+403.944729978" watchObservedRunningTime="2026-01-29 16:28:41.044369038 +0000 UTC m=+403.953088330" Jan 29 16:28:43 crc kubenswrapper[4886]: I0129 16:28:43.542468 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" podUID="92af746d-c60d-46a4-9be0-0ad28882ac0e" containerName="oauth-openshift" containerID="cri-o://47b4200b809c1086f4ae9fa69412cd5a201589369e8ff103458bcc2e4a47f38e" gracePeriod=15 Jan 29 16:28:43 crc kubenswrapper[4886]: E0129 16:28:43.766024 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:28:43 crc kubenswrapper[4886]: E0129 16:28:43.766518 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8jsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-q5hs7_openshift-marketplace(a7325ad0-28bf-45e0-bbd5-160f441de091): ErrImagePull: initializing source 
docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:28:43 crc kubenswrapper[4886]: E0129 16:28:43.767692 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-q5hs7" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091" Jan 29 16:28:43 crc kubenswrapper[4886]: I0129 16:28:43.983887 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.011510 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-c659c4677-kmlgq"] Jan 29 16:28:44 crc kubenswrapper[4886]: E0129 16:28:44.011792 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92af746d-c60d-46a4-9be0-0ad28882ac0e" containerName="oauth-openshift" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.011805 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="92af746d-c60d-46a4-9be0-0ad28882ac0e" containerName="oauth-openshift" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.011952 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="92af746d-c60d-46a4-9be0-0ad28882ac0e" containerName="oauth-openshift" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.015668 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.031194 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-c659c4677-kmlgq"] Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.040766 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"48feb470-6d6f-4fa2-a419-40698fb3a20a","Type":"ContainerStarted","Data":"f45282d692ecff56ac6b45257f4526bfbc95301c27c148001ac177998831b5c8"} Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.041058 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"48feb470-6d6f-4fa2-a419-40698fb3a20a","Type":"ContainerStarted","Data":"c0739d6b176c7f54f83d7375bf40b37046368ae1be9d8b805d819e6fdd043b90"} Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.041193 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"48feb470-6d6f-4fa2-a419-40698fb3a20a","Type":"ContainerStarted","Data":"23d39dacd0d34217d4ec721d08220e5c4de0e967c2110943736e189bf8a9483a"} Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.041390 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"48feb470-6d6f-4fa2-a419-40698fb3a20a","Type":"ContainerStarted","Data":"1aa73a87069bdb664b356c00ae9795fc1f033de4a55964e20bebd5ab17ebf38d"} Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.041522 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"48feb470-6d6f-4fa2-a419-40698fb3a20a","Type":"ContainerStarted","Data":"9f001fc76e705569bff9b276d6f937c9a6baae37918b2c313582cc886869c062"}
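The community-operators-q5hs7 failure above is worth separating from ordinary pull flakiness: a 403 while requesting the bearer token for registry.redhat.io means the pull secret or registry entitlement is missing or wrong, and the kubelet's backoff (ErrImagePull, then ImagePullBackOff) will keep reproducing the same error until the credentials change. A hypothetical triage helper along those lines; the buckets and matched substrings are illustrative, not kubelet's actual logic:

    // pull_errors.go — illustrative triage of pull failures like the 403
    // above; the classification is my own, not kubelet's.
    package main

    import (
    	"fmt"
    	"strings"
    )

    // classify buckets a pull-error message: auth-style failures need a
    // credentials/entitlement fix, not retries.
    func classify(msg string) string {
    	switch {
    	case strings.Contains(msg, "403 (Forbidden)") || strings.Contains(msg, "401"):
    		return "auth: fix the pull secret / registry entitlement"
    	case strings.Contains(msg, "manifest unknown") || strings.Contains(msg, "not found"):
    		return "reference: image name or tag is wrong"
    	default:
    		return "transient: let the backoff retry"
    	}
    }

    func main() {
    	msg := "initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: " +
    		"Requesting bearer token: invalid status code from registry 403 (Forbidden)"
    	fmt.Println(classify(msg)) // auth: fix the pull secret / registry entitlement
    }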
event={"ID":"48feb470-6d6f-4fa2-a419-40698fb3a20a","Type":"ContainerStarted","Data":"9f001fc76e705569bff9b276d6f937c9a6baae37918b2c313582cc886869c062"} Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.041653 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"48feb470-6d6f-4fa2-a419-40698fb3a20a","Type":"ContainerStarted","Data":"5f71b5a438496a8654fa0cf90e3ff3023f14b629a79d9d8b70ad346dfcc5ec21"} Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.045688 4886 generic.go:334] "Generic (PLEG): container finished" podID="92af746d-c60d-46a4-9be0-0ad28882ac0e" containerID="47b4200b809c1086f4ae9fa69412cd5a201589369e8ff103458bcc2e4a47f38e" exitCode=0 Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.045715 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.045737 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" event={"ID":"92af746d-c60d-46a4-9be0-0ad28882ac0e","Type":"ContainerDied","Data":"47b4200b809c1086f4ae9fa69412cd5a201589369e8ff103458bcc2e4a47f38e"} Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.046181 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg" event={"ID":"92af746d-c60d-46a4-9be0-0ad28882ac0e","Type":"ContainerDied","Data":"14141aff9fbd287a70454765b395ba76ef2991c8de80ea1c92111cb0e0c784c3"} Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.046209 4886 scope.go:117] "RemoveContainer" containerID="47b4200b809c1086f4ae9fa69412cd5a201589369e8ff103458bcc2e4a47f38e" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.067864 4886 scope.go:117] "RemoveContainer" containerID="47b4200b809c1086f4ae9fa69412cd5a201589369e8ff103458bcc2e4a47f38e" Jan 29 16:28:44 crc kubenswrapper[4886]: E0129 16:28:44.068877 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47b4200b809c1086f4ae9fa69412cd5a201589369e8ff103458bcc2e4a47f38e\": container with ID starting with 47b4200b809c1086f4ae9fa69412cd5a201589369e8ff103458bcc2e4a47f38e not found: ID does not exist" containerID="47b4200b809c1086f4ae9fa69412cd5a201589369e8ff103458bcc2e4a47f38e" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.069124 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47b4200b809c1086f4ae9fa69412cd5a201589369e8ff103458bcc2e4a47f38e"} err="failed to get container status \"47b4200b809c1086f4ae9fa69412cd5a201589369e8ff103458bcc2e4a47f38e\": rpc error: code = NotFound desc = could not find container \"47b4200b809c1086f4ae9fa69412cd5a201589369e8ff103458bcc2e4a47f38e\": container with ID starting with 47b4200b809c1086f4ae9fa69412cd5a201589369e8ff103458bcc2e4a47f38e not found: ID does not exist" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.086553 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.957244436 podStartE2EDuration="8.086536638s" podCreationTimestamp="2026-01-29 16:28:36 +0000 UTC" firstStartedPulling="2026-01-29 16:28:39.989445009 +0000 UTC m=+402.898164281" lastFinishedPulling="2026-01-29 16:28:43.118737211 +0000 UTC m=+406.027456483" observedRunningTime="2026-01-29 16:28:44.084405609 +0000 UTC m=+406.993124901" 
watchObservedRunningTime="2026-01-29 16:28:44.086536638 +0000 UTC m=+406.995255910" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.145991 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92af746d-c60d-46a4-9be0-0ad28882ac0e-audit-dir\") pod \"92af746d-c60d-46a4-9be0-0ad28882ac0e\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.146076 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-serving-cert\") pod \"92af746d-c60d-46a4-9be0-0ad28882ac0e\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.146117 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-user-idp-0-file-data\") pod \"92af746d-c60d-46a4-9be0-0ad28882ac0e\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.146143 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-user-template-login\") pod \"92af746d-c60d-46a4-9be0-0ad28882ac0e\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.146399 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92af746d-c60d-46a4-9be0-0ad28882ac0e-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "92af746d-c60d-46a4-9be0-0ad28882ac0e" (UID: "92af746d-c60d-46a4-9be0-0ad28882ac0e"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.147258 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-cliconfig\") pod \"92af746d-c60d-46a4-9be0-0ad28882ac0e\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.147314 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92af746d-c60d-46a4-9be0-0ad28882ac0e-audit-policies\") pod \"92af746d-c60d-46a4-9be0-0ad28882ac0e\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.147853 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-869nb\" (UniqueName: \"kubernetes.io/projected/92af746d-c60d-46a4-9be0-0ad28882ac0e-kube-api-access-869nb\") pod \"92af746d-c60d-46a4-9be0-0ad28882ac0e\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.148227 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-trusted-ca-bundle\") pod \"92af746d-c60d-46a4-9be0-0ad28882ac0e\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.148272 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-user-template-provider-selection\") pod \"92af746d-c60d-46a4-9be0-0ad28882ac0e\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.148398 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-user-template-error\") pod \"92af746d-c60d-46a4-9be0-0ad28882ac0e\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.148477 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-router-certs\") pod \"92af746d-c60d-46a4-9be0-0ad28882ac0e\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.148502 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-session\") pod \"92af746d-c60d-46a4-9be0-0ad28882ac0e\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.148539 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-ocp-branding-template\") pod \"92af746d-c60d-46a4-9be0-0ad28882ac0e\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " Jan 29 16:28:44 crc 
kubenswrapper[4886]: I0129 16:28:44.148658 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-service-ca\") pod \"92af746d-c60d-46a4-9be0-0ad28882ac0e\" (UID: \"92af746d-c60d-46a4-9be0-0ad28882ac0e\") " Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.148941 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-system-service-ca\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.148985 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-user-template-login\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.149032 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-system-router-certs\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.149096 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.149196 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9jk9\" (UniqueName: \"kubernetes.io/projected/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-kube-api-access-b9jk9\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.149549 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.149598 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-user-template-error\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " 
pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.149644 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.149728 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.149847 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.149905 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-audit-policies\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.149963 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-audit-dir\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.150004 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.150113 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-system-session\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.150284 4886 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92af746d-c60d-46a4-9be0-0ad28882ac0e-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.147394 
4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "92af746d-c60d-46a4-9be0-0ad28882ac0e" (UID: "92af746d-c60d-46a4-9be0-0ad28882ac0e"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.147780 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92af746d-c60d-46a4-9be0-0ad28882ac0e-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "92af746d-c60d-46a4-9be0-0ad28882ac0e" (UID: "92af746d-c60d-46a4-9be0-0ad28882ac0e"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.150153 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "92af746d-c60d-46a4-9be0-0ad28882ac0e" (UID: "92af746d-c60d-46a4-9be0-0ad28882ac0e"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.150679 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "92af746d-c60d-46a4-9be0-0ad28882ac0e" (UID: "92af746d-c60d-46a4-9be0-0ad28882ac0e"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.151347 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "92af746d-c60d-46a4-9be0-0ad28882ac0e" (UID: "92af746d-c60d-46a4-9be0-0ad28882ac0e"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.151729 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92af746d-c60d-46a4-9be0-0ad28882ac0e-kube-api-access-869nb" (OuterVolumeSpecName: "kube-api-access-869nb") pod "92af746d-c60d-46a4-9be0-0ad28882ac0e" (UID: "92af746d-c60d-46a4-9be0-0ad28882ac0e"). InnerVolumeSpecName "kube-api-access-869nb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.152556 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "92af746d-c60d-46a4-9be0-0ad28882ac0e" (UID: "92af746d-c60d-46a4-9be0-0ad28882ac0e"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.153058 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "92af746d-c60d-46a4-9be0-0ad28882ac0e" (UID: "92af746d-c60d-46a4-9be0-0ad28882ac0e"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.154227 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "92af746d-c60d-46a4-9be0-0ad28882ac0e" (UID: "92af746d-c60d-46a4-9be0-0ad28882ac0e"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.154718 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "92af746d-c60d-46a4-9be0-0ad28882ac0e" (UID: "92af746d-c60d-46a4-9be0-0ad28882ac0e"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.155381 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "92af746d-c60d-46a4-9be0-0ad28882ac0e" (UID: "92af746d-c60d-46a4-9be0-0ad28882ac0e"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.156244 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "92af746d-c60d-46a4-9be0-0ad28882ac0e" (UID: "92af746d-c60d-46a4-9be0-0ad28882ac0e"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.157592 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "92af746d-c60d-46a4-9be0-0ad28882ac0e" (UID: "92af746d-c60d-46a4-9be0-0ad28882ac0e"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.252000 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.252096 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9jk9\" (UniqueName: \"kubernetes.io/projected/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-kube-api-access-b9jk9\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.252163 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.252202 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-user-template-error\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.252243 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.252292 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.252377 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.252420 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-audit-policies\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " 
pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.252463 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-audit-dir\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.252504 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.252540 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-system-session\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.252608 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-system-service-ca\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.252641 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-user-template-login\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.252685 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-system-router-certs\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.252765 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.252791 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.252812 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-serving-cert\") on 
node \"crc\" DevicePath \"\"" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.252833 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.252852 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.252871 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.252889 4886 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92af746d-c60d-46a4-9be0-0ad28882ac0e-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.252908 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-869nb\" (UniqueName: \"kubernetes.io/projected/92af746d-c60d-46a4-9be0-0ad28882ac0e-kube-api-access-869nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.252927 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.252946 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.254936 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-audit-dir\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.255007 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.255068 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.255579 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-audit-policies\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.256249 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-system-service-ca\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.256300 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.256344 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.256359 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92af746d-c60d-46a4-9be0-0ad28882ac0e-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.256724 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.261604 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-user-template-error\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.261744 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-system-session\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.270752 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.270814 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.271879 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-user-template-login\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.273062 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.277196 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9jk9\" (UniqueName: \"kubernetes.io/projected/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-kube-api-access-b9jk9\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.281937 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8fdc5748-bb0c-435f-9cd3-9c093d647bf1-v4-0-config-system-router-certs\") pod \"oauth-openshift-c659c4677-kmlgq\" (UID: \"8fdc5748-bb0c-435f-9cd3-9c093d647bf1\") " pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.331973 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.391347 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg"] Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.395691 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-9fbfc7dc4-r9gqg"] Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.622879 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92af746d-c60d-46a4-9be0-0ad28882ac0e" path="/var/lib/kubelet/pods/92af746d-c60d-46a4-9be0-0ad28882ac0e/volumes" Jan 29 16:28:44 crc kubenswrapper[4886]: I0129 16:28:44.761091 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-c659c4677-kmlgq"] Jan 29 16:28:44 crc kubenswrapper[4886]: W0129 16:28:44.764939 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fdc5748_bb0c_435f_9cd3_9c093d647bf1.slice/crio-ccf14c137f3b2319917ab4d4372a94bb3040e4782239b64f2619fbff882e721f WatchSource:0}: Error finding container ccf14c137f3b2319917ab4d4372a94bb3040e4782239b64f2619fbff882e721f: Status 404 returned error can't find the container with id ccf14c137f3b2319917ab4d4372a94bb3040e4782239b64f2619fbff882e721f Jan 29 16:28:45 crc kubenswrapper[4886]: I0129 16:28:45.058244 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" event={"ID":"8fdc5748-bb0c-435f-9cd3-9c093d647bf1","Type":"ContainerStarted","Data":"29e1aa2a3cc88075c47eedab0663e96cb963c626f255371a2afe9139afeb422e"} Jan 29 16:28:45 crc kubenswrapper[4886]: I0129 16:28:45.058769 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" event={"ID":"8fdc5748-bb0c-435f-9cd3-9c093d647bf1","Type":"ContainerStarted","Data":"ccf14c137f3b2319917ab4d4372a94bb3040e4782239b64f2619fbff882e721f"} Jan 29 16:28:45 crc kubenswrapper[4886]: I0129 16:28:45.080829 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" podStartSLOduration=27.08081433 podStartE2EDuration="27.08081433s" podCreationTimestamp="2026-01-29 16:28:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:28:45.077073546 +0000 UTC m=+407.985792818" watchObservedRunningTime="2026-01-29 16:28:45.08081433 +0000 UTC m=+407.989533602" Jan 29 16:28:45 crc kubenswrapper[4886]: I0129 16:28:45.746603 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-54754b854f-fgkbk" Jan 29 16:28:45 crc kubenswrapper[4886]: I0129 16:28:45.746665 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-54754b854f-fgkbk" Jan 29 16:28:45 crc kubenswrapper[4886]: I0129 16:28:45.748754 4886 patch_prober.go:28] interesting pod/console-54754b854f-fgkbk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.73:8443/health\": dial tcp 10.217.0.73:8443: connect: connection refused" start-of-body= Jan 29 16:28:45 crc kubenswrapper[4886]: I0129 16:28:45.748820 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-54754b854f-fgkbk" 
podUID="56fe8de1-76b0-42ad-9f62-53ac51eac78d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.73:8443/health\": dial tcp 10.217.0.73:8443: connect: connection refused" Jan 29 16:28:45 crc kubenswrapper[4886]: E0129 16:28:45.749364 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:28:45 crc kubenswrapper[4886]: E0129 16:28:45.749464 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mlnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jfv6k_openshift-marketplace(69003a39-1c09-4087-a494-ebfd69e973cf): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:28:45 crc kubenswrapper[4886]: E0129 16:28:45.751306 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:28:46 crc kubenswrapper[4886]: I0129 16:28:46.065342 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:46 crc kubenswrapper[4886]: I0129 16:28:46.072218 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-c659c4677-kmlgq" Jan 29 16:28:46 crc kubenswrapper[4886]: E0129 16:28:46.746362 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = 
Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:28:46 crc kubenswrapper[4886]: E0129 16:28:46.746720 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vn92n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-zkk68_openshift-marketplace(d84ce3e9-c41a-4a08-8d86-2a918d5e9450): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:28:46 crc kubenswrapper[4886]: E0129 16:28:46.748185 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:28:47 crc kubenswrapper[4886]: I0129 16:28:47.313191 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Jan 29 16:28:47 crc kubenswrapper[4886]: E0129 16:28:47.743157 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:28:47 crc kubenswrapper[4886]: E0129 16:28:47.743402 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vf7sq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4qbl4_openshift-marketplace(57aa9115-b2d5-45aa-8ac3-e251c0907e45): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:28:47 crc kubenswrapper[4886]: E0129 16:28:47.744626 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-4qbl4" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" Jan 29 16:28:55 crc kubenswrapper[4886]: I0129 16:28:55.753550 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-54754b854f-fgkbk" Jan 29 16:28:55 crc kubenswrapper[4886]: I0129 16:28:55.766083 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-54754b854f-fgkbk" Jan 29 16:28:55 crc kubenswrapper[4886]: I0129 16:28:55.874508 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-frztl"] Jan 29 16:28:56 crc kubenswrapper[4886]: I0129 16:28:56.217368 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-75f86dc845-cd7l9" Jan 29 16:28:56 crc kubenswrapper[4886]: I0129 16:28:56.217443 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-75f86dc845-cd7l9" Jan 29 16:28:56 crc kubenswrapper[4886]: I0129 16:28:56.437619 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-2gkn5" Jan 29 16:28:56 crc kubenswrapper[4886]: I0129 16:28:56.502803 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-44l86"] Jan 29 16:28:57 crc kubenswrapper[4886]: E0129 16:28:57.616825 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q5hs7" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091" Jan 29 16:28:57 crc kubenswrapper[4886]: E0129 16:28:57.617123 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:28:58 crc kubenswrapper[4886]: E0129 16:28:58.628572 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:28:59 crc kubenswrapper[4886]: I0129 16:28:59.661043 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:28:59 crc kubenswrapper[4886]: I0129 16:28:59.661106 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:29:00 crc kubenswrapper[4886]: E0129 16:29:00.616382 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4qbl4" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" Jan 29 16:29:08 crc kubenswrapper[4886]: E0129 16:29:08.623938 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q5hs7" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091" Jan 29 16:29:11 crc kubenswrapper[4886]: E0129 16:29:11.618365 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:29:12 crc kubenswrapper[4886]: E0129 16:29:12.617016 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:29:14 crc kubenswrapper[4886]: E0129 16:29:14.619923 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: 
\"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4qbl4" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" Jan 29 16:29:16 crc kubenswrapper[4886]: I0129 16:29:16.222480 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-75f86dc845-cd7l9" Jan 29 16:29:16 crc kubenswrapper[4886]: I0129 16:29:16.236881 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-75f86dc845-cd7l9" Jan 29 16:29:20 crc kubenswrapper[4886]: I0129 16:29:20.919526 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-frztl" podUID="ffb1a6d7-9220-473e-9fcd-8d91d590f3a5" containerName="console" containerID="cri-o://1b0d59f7a0b0f2503aadbe69a4ed4abbcb0da9a1640279030e487d1ecaa3fce8" gracePeriod=15 Jan 29 16:29:21 crc kubenswrapper[4886]: I0129 16:29:21.293279 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-frztl_ffb1a6d7-9220-473e-9fcd-8d91d590f3a5/console/0.log" Jan 29 16:29:21 crc kubenswrapper[4886]: I0129 16:29:21.293715 4886 generic.go:334] "Generic (PLEG): container finished" podID="ffb1a6d7-9220-473e-9fcd-8d91d590f3a5" containerID="1b0d59f7a0b0f2503aadbe69a4ed4abbcb0da9a1640279030e487d1ecaa3fce8" exitCode=2 Jan 29 16:29:21 crc kubenswrapper[4886]: I0129 16:29:21.293754 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-frztl" event={"ID":"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5","Type":"ContainerDied","Data":"1b0d59f7a0b0f2503aadbe69a4ed4abbcb0da9a1640279030e487d1ecaa3fce8"} Jan 29 16:29:21 crc kubenswrapper[4886]: I0129 16:29:21.420269 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-frztl_ffb1a6d7-9220-473e-9fcd-8d91d590f3a5/console/0.log" Jan 29 16:29:21 crc kubenswrapper[4886]: I0129 16:29:21.420391 4886 util.go:48] "No ready sandbox for pod can be found. 
Jan 29 16:29:21 crc kubenswrapper[4886]: I0129 16:29:21.420391 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-frztl"
Jan 29 16:29:21 crc kubenswrapper[4886]: I0129 16:29:21.551614 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-service-ca\") pod \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\" (UID: \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\") "
Jan 29 16:29:21 crc kubenswrapper[4886]: I0129 16:29:21.551684 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-trusted-ca-bundle\") pod \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\" (UID: \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\") "
Jan 29 16:29:21 crc kubenswrapper[4886]: I0129 16:29:21.551745 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-oauth-serving-cert\") pod \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\" (UID: \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\") "
Jan 29 16:29:21 crc kubenswrapper[4886]: I0129 16:29:21.551782 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-console-oauth-config\") pod \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\" (UID: \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\") "
Jan 29 16:29:21 crc kubenswrapper[4886]: I0129 16:29:21.551802 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-console-serving-cert\") pod \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\" (UID: \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\") "
Jan 29 16:29:21 crc kubenswrapper[4886]: I0129 16:29:21.551824 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-console-config\") pod \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\" (UID: \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\") "
Jan 29 16:29:21 crc kubenswrapper[4886]: I0129 16:29:21.551860 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhgl2\" (UniqueName: \"kubernetes.io/projected/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-kube-api-access-zhgl2\") pod \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\" (UID: \"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5\") "
Jan 29 16:29:21 crc kubenswrapper[4886]: I0129 16:29:21.553080 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-service-ca" (OuterVolumeSpecName: "service-ca") pod "ffb1a6d7-9220-473e-9fcd-8d91d590f3a5" (UID: "ffb1a6d7-9220-473e-9fcd-8d91d590f3a5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:21 crc kubenswrapper[4886]: I0129 16:29:21.553205 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ffb1a6d7-9220-473e-9fcd-8d91d590f3a5" (UID: "ffb1a6d7-9220-473e-9fcd-8d91d590f3a5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:21 crc kubenswrapper[4886]: I0129 16:29:21.553312 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-console-config" (OuterVolumeSpecName: "console-config") pod "ffb1a6d7-9220-473e-9fcd-8d91d590f3a5" (UID: "ffb1a6d7-9220-473e-9fcd-8d91d590f3a5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:21 crc kubenswrapper[4886]: I0129 16:29:21.553306 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ffb1a6d7-9220-473e-9fcd-8d91d590f3a5" (UID: "ffb1a6d7-9220-473e-9fcd-8d91d590f3a5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:21 crc kubenswrapper[4886]: I0129 16:29:21.554065 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-44l86" podUID="b1d6caa5-f77a-4acf-a631-0c3abb84959c" containerName="registry" containerID="cri-o://deed27046f024e80d24dc9a6d74e2361911272418a25dac03f3d34ed2d07513f" gracePeriod=30
Jan 29 16:29:21 crc kubenswrapper[4886]: I0129 16:29:21.558164 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ffb1a6d7-9220-473e-9fcd-8d91d590f3a5" (UID: "ffb1a6d7-9220-473e-9fcd-8d91d590f3a5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:21 crc kubenswrapper[4886]: I0129 16:29:21.558220 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-kube-api-access-zhgl2" (OuterVolumeSpecName: "kube-api-access-zhgl2") pod "ffb1a6d7-9220-473e-9fcd-8d91d590f3a5" (UID: "ffb1a6d7-9220-473e-9fcd-8d91d590f3a5"). InnerVolumeSpecName "kube-api-access-zhgl2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:21 crc kubenswrapper[4886]: I0129 16:29:21.559046 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ffb1a6d7-9220-473e-9fcd-8d91d590f3a5" (UID: "ffb1a6d7-9220-473e-9fcd-8d91d590f3a5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:21 crc kubenswrapper[4886]: I0129 16:29:21.653235 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhgl2\" (UniqueName: \"kubernetes.io/projected/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-kube-api-access-zhgl2\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:21 crc kubenswrapper[4886]: I0129 16:29:21.653274 4886 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-service-ca\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:21 crc kubenswrapper[4886]: I0129 16:29:21.653284 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:21 crc kubenswrapper[4886]: I0129 16:29:21.653292 4886 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:21 crc kubenswrapper[4886]: I0129 16:29:21.653301 4886 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:21 crc kubenswrapper[4886]: I0129 16:29:21.653309 4886 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:21 crc kubenswrapper[4886]: I0129 16:29:21.653319 4886 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5-console-config\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:22 crc kubenswrapper[4886]: I0129 16:29:22.303415 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-frztl_ffb1a6d7-9220-473e-9fcd-8d91d590f3a5/console/0.log"
Jan 29 16:29:22 crc kubenswrapper[4886]: I0129 16:29:22.303700 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-frztl" event={"ID":"ffb1a6d7-9220-473e-9fcd-8d91d590f3a5","Type":"ContainerDied","Data":"f5f1eb8dc3efdd72b68491a7af9fe6df247f17abe7404590089aab88c87a64e1"}
Jan 29 16:29:22 crc kubenswrapper[4886]: I0129 16:29:22.303753 4886 scope.go:117] "RemoveContainer" containerID="1b0d59f7a0b0f2503aadbe69a4ed4abbcb0da9a1640279030e487d1ecaa3fce8"
Jan 29 16:29:22 crc kubenswrapper[4886]: I0129 16:29:22.303791 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-frztl"
Jan 29 16:29:22 crc kubenswrapper[4886]: I0129 16:29:22.338089 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-frztl"]
Jan 29 16:29:22 crc kubenswrapper[4886]: I0129 16:29:22.343283 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-frztl"]
Jan 29 16:29:22 crc kubenswrapper[4886]: E0129 16:29:22.617907 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q5hs7" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091"
Jan 29 16:29:22 crc kubenswrapper[4886]: I0129 16:29:22.624860 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffb1a6d7-9220-473e-9fcd-8d91d590f3a5" path="/var/lib/kubelet/pods/ffb1a6d7-9220-473e-9fcd-8d91d590f3a5/volumes"
Jan 29 16:29:22 crc kubenswrapper[4886]: I0129 16:29:22.961088 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-44l86"
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.076034 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b1d6caa5-f77a-4acf-a631-0c3abb84959c-registry-certificates\") pod \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") "
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.076395 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1d6caa5-f77a-4acf-a631-0c3abb84959c-bound-sa-token\") pod \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") "
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.076504 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") "
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.076543 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1d6caa5-f77a-4acf-a631-0c3abb84959c-trusted-ca\") pod \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") "
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.076583 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b1d6caa5-f77a-4acf-a631-0c3abb84959c-installation-pull-secrets\") pod \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") "
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.076658 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b1d6caa5-f77a-4acf-a631-0c3abb84959c-registry-tls\") pod \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") "
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.076696 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b1d6caa5-f77a-4acf-a631-0c3abb84959c-ca-trust-extracted\") pod \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") "
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.076736 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vgbh\" (UniqueName: \"kubernetes.io/projected/b1d6caa5-f77a-4acf-a631-0c3abb84959c-kube-api-access-8vgbh\") pod \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\" (UID: \"b1d6caa5-f77a-4acf-a631-0c3abb84959c\") "
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.077190 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1d6caa5-f77a-4acf-a631-0c3abb84959c-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "b1d6caa5-f77a-4acf-a631-0c3abb84959c" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.077383 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1d6caa5-f77a-4acf-a631-0c3abb84959c-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "b1d6caa5-f77a-4acf-a631-0c3abb84959c" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.082898 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1d6caa5-f77a-4acf-a631-0c3abb84959c-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.082919 4886 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b1d6caa5-f77a-4acf-a631-0c3abb84959c-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.083994 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1d6caa5-f77a-4acf-a631-0c3abb84959c-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "b1d6caa5-f77a-4acf-a631-0c3abb84959c" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.084215 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1d6caa5-f77a-4acf-a631-0c3abb84959c-kube-api-access-8vgbh" (OuterVolumeSpecName: "kube-api-access-8vgbh") pod "b1d6caa5-f77a-4acf-a631-0c3abb84959c" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c"). InnerVolumeSpecName "kube-api-access-8vgbh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.084549 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1d6caa5-f77a-4acf-a631-0c3abb84959c-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "b1d6caa5-f77a-4acf-a631-0c3abb84959c" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.085944 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1d6caa5-f77a-4acf-a631-0c3abb84959c-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "b1d6caa5-f77a-4acf-a631-0c3abb84959c" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.090105 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "b1d6caa5-f77a-4acf-a631-0c3abb84959c" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.101539 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1d6caa5-f77a-4acf-a631-0c3abb84959c-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "b1d6caa5-f77a-4acf-a631-0c3abb84959c" (UID: "b1d6caa5-f77a-4acf-a631-0c3abb84959c"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.183874 4886 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b1d6caa5-f77a-4acf-a631-0c3abb84959c-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.183915 4886 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b1d6caa5-f77a-4acf-a631-0c3abb84959c-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.183964 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vgbh\" (UniqueName: \"kubernetes.io/projected/b1d6caa5-f77a-4acf-a631-0c3abb84959c-kube-api-access-8vgbh\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.183978 4886 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1d6caa5-f77a-4acf-a631-0c3abb84959c-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.183989 4886 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b1d6caa5-f77a-4acf-a631-0c3abb84959c-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.318569 4886 generic.go:334] "Generic (PLEG): container finished" podID="b1d6caa5-f77a-4acf-a631-0c3abb84959c" containerID="deed27046f024e80d24dc9a6d74e2361911272418a25dac03f3d34ed2d07513f" exitCode=0
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.318609 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-44l86" event={"ID":"b1d6caa5-f77a-4acf-a631-0c3abb84959c","Type":"ContainerDied","Data":"deed27046f024e80d24dc9a6d74e2361911272418a25dac03f3d34ed2d07513f"}
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.318594 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-44l86"
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.318644 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-44l86" event={"ID":"b1d6caa5-f77a-4acf-a631-0c3abb84959c","Type":"ContainerDied","Data":"a00a9bdfeb0d8ca50bb13348e56690ba099ee336a61298251b903a6dea3d27eb"}
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.318663 4886 scope.go:117] "RemoveContainer" containerID="deed27046f024e80d24dc9a6d74e2361911272418a25dac03f3d34ed2d07513f"
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.353050 4886 scope.go:117] "RemoveContainer" containerID="deed27046f024e80d24dc9a6d74e2361911272418a25dac03f3d34ed2d07513f"
Jan 29 16:29:23 crc kubenswrapper[4886]: E0129 16:29:23.353789 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"deed27046f024e80d24dc9a6d74e2361911272418a25dac03f3d34ed2d07513f\": container with ID starting with deed27046f024e80d24dc9a6d74e2361911272418a25dac03f3d34ed2d07513f not found: ID does not exist" containerID="deed27046f024e80d24dc9a6d74e2361911272418a25dac03f3d34ed2d07513f"
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.353841 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"deed27046f024e80d24dc9a6d74e2361911272418a25dac03f3d34ed2d07513f"} err="failed to get container status \"deed27046f024e80d24dc9a6d74e2361911272418a25dac03f3d34ed2d07513f\": rpc error: code = NotFound desc = could not find container \"deed27046f024e80d24dc9a6d74e2361911272418a25dac03f3d34ed2d07513f\": container with ID starting with deed27046f024e80d24dc9a6d74e2361911272418a25dac03f3d34ed2d07513f not found: ID does not exist"
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.354378 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-44l86"]
Jan 29 16:29:23 crc kubenswrapper[4886]: I0129 16:29:23.359146 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-44l86"]
Jan 29 16:29:24 crc kubenswrapper[4886]: I0129 16:29:24.629663 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1d6caa5-f77a-4acf-a631-0c3abb84959c" path="/var/lib/kubelet/pods/b1d6caa5-f77a-4acf-a631-0c3abb84959c/volumes"
Jan 29 16:29:25 crc kubenswrapper[4886]: E0129 16:29:25.618197 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450"
Jan 29 16:29:26 crc kubenswrapper[4886]: E0129 16:29:26.618661 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4qbl4" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45"
Jan 29 16:29:26 crc kubenswrapper[4886]: E0129 16:29:26.749606 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 29 16:29:26 crc kubenswrapper[4886]: E0129 16:29:26.750104 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mlnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jfv6k_openshift-marketplace(69003a39-1c09-4087-a494-ebfd69e973cf): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:29:26 crc kubenswrapper[4886]: E0129 16:29:26.751967 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf"
Jan 29 16:29:29 crc kubenswrapper[4886]: I0129 16:29:29.661231 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 16:29:29 crc kubenswrapper[4886]: I0129 16:29:29.661357 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 16:29:29 crc kubenswrapper[4886]: I0129 16:29:29.661427 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp"
Jan 29 16:29:29 crc kubenswrapper[4886]: I0129 16:29:29.662369 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"96fb4b3b0684eec0f8e815c984345d77640459634c9d28cbf8434505ebf34891"} pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 16:29:29 crc kubenswrapper[4886]: I0129 16:29:29.662478 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" containerID="cri-o://96fb4b3b0684eec0f8e815c984345d77640459634c9d28cbf8434505ebf34891" gracePeriod=600
Jan 29 16:29:30 crc kubenswrapper[4886]: I0129 16:29:30.384008 4886 generic.go:334] "Generic (PLEG): container finished" podID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerID="96fb4b3b0684eec0f8e815c984345d77640459634c9d28cbf8434505ebf34891" exitCode=0
Jan 29 16:29:30 crc kubenswrapper[4886]: I0129 16:29:30.384105 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerDied","Data":"96fb4b3b0684eec0f8e815c984345d77640459634c9d28cbf8434505ebf34891"}
Jan 29 16:29:30 crc kubenswrapper[4886]: I0129 16:29:30.384997 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerStarted","Data":"ae7876e7e5e026deccf52515d738eb4b775938bb13eef71ab45573508b57aaa0"}
Jan 29 16:29:30 crc kubenswrapper[4886]: I0129 16:29:30.385035 4886 scope.go:117] "RemoveContainer" containerID="8055fe73a1cd8fb346a9937fb9960eb4b8cf16950f5ed88b206f4a30871b1028"
Jan 29 16:29:35 crc kubenswrapper[4886]: E0129 16:29:35.739670 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18"
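The machine-config-daemon records above show a complete liveness cycle: the HTTP probe to http://127.0.0.1:8798/health fails with connection refused, the kubelet marks the container unhealthy, kills it with gracePeriod=600, and the next PLEG relist reports the old container dead and a replacement started. A sketch of such an HTTP probe loop; the failure threshold of 3 is an assumption, since the pod's actual probe spec is not in the log:

    import urllib.request

    def http_probe(url, timeout=1.0):
        """Return True if the endpoint answers with a 2xx status."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return 200 <= resp.status < 300
        except OSError:
            # covers "connect: connection refused" as in the records above
            return False

    FAILURE_THRESHOLD = 3  # assumed; the real failureThreshold lives in the pod spec
    failures = sum(
        0 if http_probe("http://127.0.0.1:8798/health") else 1
        for _ in range(FAILURE_THRESHOLD)
    )
    if failures >= FAILURE_THRESHOLD:
        print("Container machine-config-daemon failed liveness probe, will be restarted")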
Jan 29 16:29:35 crc kubenswrapper[4886]: E0129 16:29:35.740296 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8jsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-q5hs7_openshift-marketplace(a7325ad0-28bf-45e0-bbd5-160f441de091): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:29:35 crc kubenswrapper[4886]: E0129 16:29:35.741745 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-q5hs7" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091"
Jan 29 16:29:37 crc kubenswrapper[4886]: I0129 16:29:37.313107 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0"
Jan 29 16:29:37 crc kubenswrapper[4886]: I0129 16:29:37.345577 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0"
Jan 29 16:29:37 crc kubenswrapper[4886]: I0129 16:29:37.452106 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Jan 29 16:29:37 crc kubenswrapper[4886]: E0129 16:29:37.735977 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 29 16:29:37 crc kubenswrapper[4886]: E0129 16:29:37.736340 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vn92n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-zkk68_openshift-marketplace(d84ce3e9-c41a-4a08-8d86-2a918d5e9450): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:29:37 crc kubenswrapper[4886]: E0129 16:29:37.737549 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450"
Jan 29 16:29:38 crc kubenswrapper[4886]: E0129 16:29:38.749531 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 29 16:29:38 crc kubenswrapper[4886]: E0129 16:29:38.749741 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vf7sq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4qbl4_openshift-marketplace(57aa9115-b2d5-45aa-8ac3-e251c0907e45): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:29:38 crc kubenswrapper[4886]: E0129 16:29:38.751006 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-4qbl4" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45"
Jan 29 16:29:40 crc kubenswrapper[4886]: E0129 16:29:40.618513 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf"
Jan 29 16:29:46 crc kubenswrapper[4886]: E0129 16:29:46.620755 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q5hs7" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091"
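Every ErrImagePull above is the same underlying failure: registry.redhat.io refuses to issue a bearer token (HTTP 403), which usually points at a pull secret that is missing or not entitled for that registry rather than at a network problem. One quick check on the node is whether the containers auth file carries any credentials for the registry at all; the /var/lib/kubelet/config.json path is the conventional node-level pull-secret location and is assumed here, as is the usual base64 "auth" field in the docker-config format:

    import base64, json

    AUTHFILE = "/var/lib/kubelet/config.json"  # assumed conventional location
    REGISTRY = "registry.redhat.io"

    with open(AUTHFILE) as f:
        auths = json.load(f).get("auths", {})

    entry = auths.get(REGISTRY)
    if entry is None:
        print(f"no credentials for {REGISTRY}: pulls will be anonymous and may 403")
    else:
        # docker-config entries normally carry base64("user:password") in "auth"
        user = base64.b64decode(entry["auth"]).split(b":", 1)[0].decode()
        print(f"credentials for {REGISTRY} present (user {user!r})")

A present but rejected credential would still 403; in that case the entitlement behind the account, not the file, is what needs checking.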
podUID="ffb1a6d7-9220-473e-9fcd-8d91d590f3a5" containerName="console" Jan 29 16:29:50 crc kubenswrapper[4886]: I0129 16:29:50.051981 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb1a6d7-9220-473e-9fcd-8d91d590f3a5" containerName="console" Jan 29 16:29:50 crc kubenswrapper[4886]: I0129 16:29:50.052168 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffb1a6d7-9220-473e-9fcd-8d91d590f3a5" containerName="console" Jan 29 16:29:50 crc kubenswrapper[4886]: I0129 16:29:50.052201 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1d6caa5-f77a-4acf-a631-0c3abb84959c" containerName="registry" Jan 29 16:29:50 crc kubenswrapper[4886]: I0129 16:29:50.052903 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-664586d6fb-g55cf" Jan 29 16:29:50 crc kubenswrapper[4886]: I0129 16:29:50.069994 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-664586d6fb-g55cf"] Jan 29 16:29:50 crc kubenswrapper[4886]: I0129 16:29:50.211762 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/42357e7c-de03-4b8b-80f5-f946411c67f7-console-config\") pod \"console-664586d6fb-g55cf\" (UID: \"42357e7c-de03-4b8b-80f5-f946411c67f7\") " pod="openshift-console/console-664586d6fb-g55cf" Jan 29 16:29:50 crc kubenswrapper[4886]: I0129 16:29:50.212158 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/42357e7c-de03-4b8b-80f5-f946411c67f7-console-serving-cert\") pod \"console-664586d6fb-g55cf\" (UID: \"42357e7c-de03-4b8b-80f5-f946411c67f7\") " pod="openshift-console/console-664586d6fb-g55cf" Jan 29 16:29:50 crc kubenswrapper[4886]: I0129 16:29:50.212196 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/42357e7c-de03-4b8b-80f5-f946411c67f7-console-oauth-config\") pod \"console-664586d6fb-g55cf\" (UID: \"42357e7c-de03-4b8b-80f5-f946411c67f7\") " pod="openshift-console/console-664586d6fb-g55cf" Jan 29 16:29:50 crc kubenswrapper[4886]: I0129 16:29:50.212287 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/42357e7c-de03-4b8b-80f5-f946411c67f7-oauth-serving-cert\") pod \"console-664586d6fb-g55cf\" (UID: \"42357e7c-de03-4b8b-80f5-f946411c67f7\") " pod="openshift-console/console-664586d6fb-g55cf" Jan 29 16:29:50 crc kubenswrapper[4886]: I0129 16:29:50.212466 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/42357e7c-de03-4b8b-80f5-f946411c67f7-service-ca\") pod \"console-664586d6fb-g55cf\" (UID: \"42357e7c-de03-4b8b-80f5-f946411c67f7\") " pod="openshift-console/console-664586d6fb-g55cf" Jan 29 16:29:50 crc kubenswrapper[4886]: I0129 16:29:50.212627 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42357e7c-de03-4b8b-80f5-f946411c67f7-trusted-ca-bundle\") pod \"console-664586d6fb-g55cf\" (UID: \"42357e7c-de03-4b8b-80f5-f946411c67f7\") " pod="openshift-console/console-664586d6fb-g55cf" Jan 29 16:29:50 crc kubenswrapper[4886]: I0129 16:29:50.212780 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln452\" (UniqueName: \"kubernetes.io/projected/42357e7c-de03-4b8b-80f5-f946411c67f7-kube-api-access-ln452\") pod \"console-664586d6fb-g55cf\" (UID: \"42357e7c-de03-4b8b-80f5-f946411c67f7\") " pod="openshift-console/console-664586d6fb-g55cf" Jan 29 16:29:50 crc kubenswrapper[4886]: I0129 16:29:50.314512 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/42357e7c-de03-4b8b-80f5-f946411c67f7-console-serving-cert\") pod \"console-664586d6fb-g55cf\" (UID: \"42357e7c-de03-4b8b-80f5-f946411c67f7\") " pod="openshift-console/console-664586d6fb-g55cf" Jan 29 16:29:50 crc kubenswrapper[4886]: I0129 16:29:50.314580 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/42357e7c-de03-4b8b-80f5-f946411c67f7-console-oauth-config\") pod \"console-664586d6fb-g55cf\" (UID: \"42357e7c-de03-4b8b-80f5-f946411c67f7\") " pod="openshift-console/console-664586d6fb-g55cf" Jan 29 16:29:50 crc kubenswrapper[4886]: I0129 16:29:50.314675 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/42357e7c-de03-4b8b-80f5-f946411c67f7-oauth-serving-cert\") pod \"console-664586d6fb-g55cf\" (UID: \"42357e7c-de03-4b8b-80f5-f946411c67f7\") " pod="openshift-console/console-664586d6fb-g55cf" Jan 29 16:29:50 crc kubenswrapper[4886]: I0129 16:29:50.314725 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/42357e7c-de03-4b8b-80f5-f946411c67f7-service-ca\") pod \"console-664586d6fb-g55cf\" (UID: \"42357e7c-de03-4b8b-80f5-f946411c67f7\") " pod="openshift-console/console-664586d6fb-g55cf" Jan 29 16:29:50 crc kubenswrapper[4886]: I0129 16:29:50.314772 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42357e7c-de03-4b8b-80f5-f946411c67f7-trusted-ca-bundle\") pod \"console-664586d6fb-g55cf\" (UID: \"42357e7c-de03-4b8b-80f5-f946411c67f7\") " pod="openshift-console/console-664586d6fb-g55cf" Jan 29 16:29:50 crc kubenswrapper[4886]: I0129 16:29:50.314831 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln452\" (UniqueName: \"kubernetes.io/projected/42357e7c-de03-4b8b-80f5-f946411c67f7-kube-api-access-ln452\") pod \"console-664586d6fb-g55cf\" (UID: \"42357e7c-de03-4b8b-80f5-f946411c67f7\") " pod="openshift-console/console-664586d6fb-g55cf" Jan 29 16:29:50 crc kubenswrapper[4886]: I0129 16:29:50.314883 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/42357e7c-de03-4b8b-80f5-f946411c67f7-console-config\") pod \"console-664586d6fb-g55cf\" (UID: \"42357e7c-de03-4b8b-80f5-f946411c67f7\") " pod="openshift-console/console-664586d6fb-g55cf" Jan 29 16:29:50 crc kubenswrapper[4886]: I0129 16:29:50.315547 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/42357e7c-de03-4b8b-80f5-f946411c67f7-service-ca\") pod \"console-664586d6fb-g55cf\" (UID: \"42357e7c-de03-4b8b-80f5-f946411c67f7\") " pod="openshift-console/console-664586d6fb-g55cf" Jan 29 16:29:50 crc kubenswrapper[4886]: I0129 16:29:50.316576 
4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/42357e7c-de03-4b8b-80f5-f946411c67f7-console-config\") pod \"console-664586d6fb-g55cf\" (UID: \"42357e7c-de03-4b8b-80f5-f946411c67f7\") " pod="openshift-console/console-664586d6fb-g55cf" Jan 29 16:29:50 crc kubenswrapper[4886]: I0129 16:29:50.316815 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/42357e7c-de03-4b8b-80f5-f946411c67f7-oauth-serving-cert\") pod \"console-664586d6fb-g55cf\" (UID: \"42357e7c-de03-4b8b-80f5-f946411c67f7\") " pod="openshift-console/console-664586d6fb-g55cf" Jan 29 16:29:50 crc kubenswrapper[4886]: I0129 16:29:50.317556 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42357e7c-de03-4b8b-80f5-f946411c67f7-trusted-ca-bundle\") pod \"console-664586d6fb-g55cf\" (UID: \"42357e7c-de03-4b8b-80f5-f946411c67f7\") " pod="openshift-console/console-664586d6fb-g55cf" Jan 29 16:29:50 crc kubenswrapper[4886]: I0129 16:29:50.323276 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/42357e7c-de03-4b8b-80f5-f946411c67f7-console-serving-cert\") pod \"console-664586d6fb-g55cf\" (UID: \"42357e7c-de03-4b8b-80f5-f946411c67f7\") " pod="openshift-console/console-664586d6fb-g55cf" Jan 29 16:29:50 crc kubenswrapper[4886]: I0129 16:29:50.330779 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/42357e7c-de03-4b8b-80f5-f946411c67f7-console-oauth-config\") pod \"console-664586d6fb-g55cf\" (UID: \"42357e7c-de03-4b8b-80f5-f946411c67f7\") " pod="openshift-console/console-664586d6fb-g55cf" Jan 29 16:29:50 crc kubenswrapper[4886]: I0129 16:29:50.342431 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln452\" (UniqueName: \"kubernetes.io/projected/42357e7c-de03-4b8b-80f5-f946411c67f7-kube-api-access-ln452\") pod \"console-664586d6fb-g55cf\" (UID: \"42357e7c-de03-4b8b-80f5-f946411c67f7\") " pod="openshift-console/console-664586d6fb-g55cf" Jan 29 16:29:50 crc kubenswrapper[4886]: I0129 16:29:50.375567 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-664586d6fb-g55cf" Jan 29 16:29:50 crc kubenswrapper[4886]: I0129 16:29:50.630058 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-664586d6fb-g55cf"] Jan 29 16:29:51 crc kubenswrapper[4886]: I0129 16:29:51.527068 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-664586d6fb-g55cf" event={"ID":"42357e7c-de03-4b8b-80f5-f946411c67f7","Type":"ContainerStarted","Data":"6019dfcf6dda95ddc80718ca451b48d8dede9d785bf016b5b0c27dcf7bc93e38"} Jan 29 16:29:51 crc kubenswrapper[4886]: I0129 16:29:51.527456 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-664586d6fb-g55cf" event={"ID":"42357e7c-de03-4b8b-80f5-f946411c67f7","Type":"ContainerStarted","Data":"4c6fe087595c24e70608f508c9599d4ead9e60d5c503746f12585384b13bc295"} Jan 29 16:29:51 crc kubenswrapper[4886]: I0129 16:29:51.549462 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-664586d6fb-g55cf" podStartSLOduration=1.5494434419999998 podStartE2EDuration="1.549443442s" podCreationTimestamp="2026-01-29 16:29:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:29:51.54526108 +0000 UTC m=+474.453980372" watchObservedRunningTime="2026-01-29 16:29:51.549443442 +0000 UTC m=+474.458162724" Jan 29 16:29:51 crc kubenswrapper[4886]: E0129 16:29:51.617317 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4qbl4" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" Jan 29 16:29:51 crc kubenswrapper[4886]: E0129 16:29:51.617706 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:29:55 crc kubenswrapper[4886]: E0129 16:29:55.619399 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:29:59 crc kubenswrapper[4886]: E0129 16:29:59.617132 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q5hs7" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091" Jan 29 16:30:00 crc kubenswrapper[4886]: I0129 16:30:00.176958 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495070-xnbx9"] Jan 29 16:30:00 crc kubenswrapper[4886]: I0129 16:30:00.177928 4886 util.go:30] "No sandbox for pod can be found. 
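The pod_startup_latency_tracker record above is plain arithmetic over the fields it prints: podStartSLOduration=1.549443442 s is watchObservedRunningTime (16:29:51.549443442) minus podCreationTimestamp (16:29:50), with no pull time subtracted since firstStartedPulling/lastFinishedPulling are the zero timestamps. Reproducing it:

    from datetime import datetime, timezone

    created = datetime(2026, 1, 29, 16, 29, 50, tzinfo=timezone.utc)
    observed = datetime(2026, 1, 29, 16, 29, 51, 549443, tzinfo=timezone.utc)

    # matches podStartSLOduration to microsecond resolution
    print((observed - created).total_seconds())  # 1.549443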
Jan 29 16:29:51 crc kubenswrapper[4886]: E0129 16:29:51.617317 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4qbl4" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45"
Jan 29 16:29:51 crc kubenswrapper[4886]: E0129 16:29:51.617706 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450"
Jan 29 16:29:55 crc kubenswrapper[4886]: E0129 16:29:55.619399 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf"
Jan 29 16:29:59 crc kubenswrapper[4886]: E0129 16:29:59.617132 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q5hs7" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091"
Jan 29 16:30:00 crc kubenswrapper[4886]: I0129 16:30:00.176958 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495070-xnbx9"]
Jan 29 16:30:00 crc kubenswrapper[4886]: I0129 16:30:00.177928 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xnbx9"
Jan 29 16:30:00 crc kubenswrapper[4886]: I0129 16:30:00.182262 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 29 16:30:00 crc kubenswrapper[4886]: I0129 16:30:00.182391 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 29 16:30:00 crc kubenswrapper[4886]: I0129 16:30:00.187987 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495070-xnbx9"]
Jan 29 16:30:00 crc kubenswrapper[4886]: I0129 16:30:00.289160 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqmrl\" (UniqueName: \"kubernetes.io/projected/18290a86-b94a-42c5-9f50-1614077f881b-kube-api-access-cqmrl\") pod \"collect-profiles-29495070-xnbx9\" (UID: \"18290a86-b94a-42c5-9f50-1614077f881b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xnbx9"
Jan 29 16:30:00 crc kubenswrapper[4886]: I0129 16:30:00.289244 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18290a86-b94a-42c5-9f50-1614077f881b-secret-volume\") pod \"collect-profiles-29495070-xnbx9\" (UID: \"18290a86-b94a-42c5-9f50-1614077f881b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xnbx9"
Jan 29 16:30:00 crc kubenswrapper[4886]: I0129 16:30:00.289425 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18290a86-b94a-42c5-9f50-1614077f881b-config-volume\") pod \"collect-profiles-29495070-xnbx9\" (UID: \"18290a86-b94a-42c5-9f50-1614077f881b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xnbx9"
Jan 29 16:30:00 crc kubenswrapper[4886]: I0129 16:30:00.376365 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-664586d6fb-g55cf"
Jan 29 16:30:00 crc kubenswrapper[4886]: I0129 16:30:00.376415 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-664586d6fb-g55cf"
Jan 29 16:30:00 crc kubenswrapper[4886]: I0129 16:30:00.382173 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-664586d6fb-g55cf"
Jan 29 16:30:00 crc kubenswrapper[4886]: I0129 16:30:00.390392 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18290a86-b94a-42c5-9f50-1614077f881b-secret-volume\") pod \"collect-profiles-29495070-xnbx9\" (UID: \"18290a86-b94a-42c5-9f50-1614077f881b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xnbx9"
Jan 29 16:30:00 crc kubenswrapper[4886]: I0129 16:30:00.390446 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18290a86-b94a-42c5-9f50-1614077f881b-config-volume\") pod \"collect-profiles-29495070-xnbx9\" (UID: \"18290a86-b94a-42c5-9f50-1614077f881b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xnbx9"
Jan 29 16:30:00 crc kubenswrapper[4886]: I0129 16:30:00.390518 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqmrl\" (UniqueName: \"kubernetes.io/projected/18290a86-b94a-42c5-9f50-1614077f881b-kube-api-access-cqmrl\") pod \"collect-profiles-29495070-xnbx9\" (UID: \"18290a86-b94a-42c5-9f50-1614077f881b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xnbx9"
Jan 29 16:30:00 crc kubenswrapper[4886]: I0129 16:30:00.391794 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18290a86-b94a-42c5-9f50-1614077f881b-config-volume\") pod \"collect-profiles-29495070-xnbx9\" (UID: \"18290a86-b94a-42c5-9f50-1614077f881b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xnbx9"
Jan 29 16:30:00 crc kubenswrapper[4886]: I0129 16:30:00.398287 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18290a86-b94a-42c5-9f50-1614077f881b-secret-volume\") pod \"collect-profiles-29495070-xnbx9\" (UID: \"18290a86-b94a-42c5-9f50-1614077f881b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xnbx9"
Jan 29 16:30:00 crc kubenswrapper[4886]: I0129 16:30:00.406809 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqmrl\" (UniqueName: \"kubernetes.io/projected/18290a86-b94a-42c5-9f50-1614077f881b-kube-api-access-cqmrl\") pod \"collect-profiles-29495070-xnbx9\" (UID: \"18290a86-b94a-42c5-9f50-1614077f881b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xnbx9"
Jan 29 16:30:00 crc kubenswrapper[4886]: I0129 16:30:00.502071 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xnbx9"
Jan 29 16:30:00 crc kubenswrapper[4886]: I0129 16:30:00.600287 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-664586d6fb-g55cf"
Jan 29 16:30:00 crc kubenswrapper[4886]: I0129 16:30:00.675529 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-54754b854f-fgkbk"]
Jan 29 16:30:00 crc kubenswrapper[4886]: I0129 16:30:00.781902 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495070-xnbx9"]
Jan 29 16:30:01 crc kubenswrapper[4886]: I0129 16:30:01.602034 4886 generic.go:334] "Generic (PLEG): container finished" podID="18290a86-b94a-42c5-9f50-1614077f881b" containerID="5f38a23b3e231c3670461bd30eb72fab48714dac00ff0dbd8042edb99ce295c4" exitCode=0
Jan 29 16:30:01 crc kubenswrapper[4886]: I0129 16:30:01.602417 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xnbx9" event={"ID":"18290a86-b94a-42c5-9f50-1614077f881b","Type":"ContainerDied","Data":"5f38a23b3e231c3670461bd30eb72fab48714dac00ff0dbd8042edb99ce295c4"}
Jan 29 16:30:01 crc kubenswrapper[4886]: I0129 16:30:01.602676 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xnbx9" event={"ID":"18290a86-b94a-42c5-9f50-1614077f881b","Type":"ContainerStarted","Data":"71c1d5e9632004d3ae72c5f6e641a2523e7dd35669f7e1827e51af004d1e1ae3"}
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xnbx9" Jan 29 16:30:02 crc kubenswrapper[4886]: I0129 16:30:02.930487 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqmrl\" (UniqueName: \"kubernetes.io/projected/18290a86-b94a-42c5-9f50-1614077f881b-kube-api-access-cqmrl\") pod \"18290a86-b94a-42c5-9f50-1614077f881b\" (UID: \"18290a86-b94a-42c5-9f50-1614077f881b\") " Jan 29 16:30:02 crc kubenswrapper[4886]: I0129 16:30:02.930640 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18290a86-b94a-42c5-9f50-1614077f881b-config-volume\") pod \"18290a86-b94a-42c5-9f50-1614077f881b\" (UID: \"18290a86-b94a-42c5-9f50-1614077f881b\") " Jan 29 16:30:02 crc kubenswrapper[4886]: I0129 16:30:02.930671 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18290a86-b94a-42c5-9f50-1614077f881b-secret-volume\") pod \"18290a86-b94a-42c5-9f50-1614077f881b\" (UID: \"18290a86-b94a-42c5-9f50-1614077f881b\") " Jan 29 16:30:02 crc kubenswrapper[4886]: I0129 16:30:02.931531 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18290a86-b94a-42c5-9f50-1614077f881b-config-volume" (OuterVolumeSpecName: "config-volume") pod "18290a86-b94a-42c5-9f50-1614077f881b" (UID: "18290a86-b94a-42c5-9f50-1614077f881b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:02 crc kubenswrapper[4886]: I0129 16:30:02.936168 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18290a86-b94a-42c5-9f50-1614077f881b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "18290a86-b94a-42c5-9f50-1614077f881b" (UID: "18290a86-b94a-42c5-9f50-1614077f881b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:02 crc kubenswrapper[4886]: I0129 16:30:02.936775 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18290a86-b94a-42c5-9f50-1614077f881b-kube-api-access-cqmrl" (OuterVolumeSpecName: "kube-api-access-cqmrl") pod "18290a86-b94a-42c5-9f50-1614077f881b" (UID: "18290a86-b94a-42c5-9f50-1614077f881b"). InnerVolumeSpecName "kube-api-access-cqmrl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:03 crc kubenswrapper[4886]: I0129 16:30:03.031728 4886 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18290a86-b94a-42c5-9f50-1614077f881b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:03 crc kubenswrapper[4886]: I0129 16:30:03.031759 4886 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18290a86-b94a-42c5-9f50-1614077f881b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:03 crc kubenswrapper[4886]: I0129 16:30:03.031770 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqmrl\" (UniqueName: \"kubernetes.io/projected/18290a86-b94a-42c5-9f50-1614077f881b-kube-api-access-cqmrl\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:03 crc kubenswrapper[4886]: I0129 16:30:03.613281 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xnbx9" event={"ID":"18290a86-b94a-42c5-9f50-1614077f881b","Type":"ContainerDied","Data":"71c1d5e9632004d3ae72c5f6e641a2523e7dd35669f7e1827e51af004d1e1ae3"} Jan 29 16:30:03 crc kubenswrapper[4886]: I0129 16:30:03.613698 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71c1d5e9632004d3ae72c5f6e641a2523e7dd35669f7e1827e51af004d1e1ae3" Jan 29 16:30:03 crc kubenswrapper[4886]: I0129 16:30:03.613410 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xnbx9" Jan 29 16:30:04 crc kubenswrapper[4886]: E0129 16:30:04.616916 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4qbl4" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" Jan 29 16:30:04 crc kubenswrapper[4886]: E0129 16:30:04.616950 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:30:08 crc kubenswrapper[4886]: E0129 16:30:08.624136 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:30:12 crc kubenswrapper[4886]: E0129 16:30:12.617203 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q5hs7" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091" Jan 29 16:30:16 crc kubenswrapper[4886]: E0129 16:30:16.617957 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-4qbl4" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" Jan 29 16:30:17 crc kubenswrapper[4886]: E0129 16:30:17.617247 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:30:21 crc kubenswrapper[4886]: E0129 16:30:21.616363 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:30:24 crc kubenswrapper[4886]: E0129 16:30:24.618006 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q5hs7" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091" Jan 29 16:30:25 crc kubenswrapper[4886]: I0129 16:30:25.730829 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-54754b854f-fgkbk" podUID="56fe8de1-76b0-42ad-9f62-53ac51eac78d" containerName="console" containerID="cri-o://912b8ca8f57d0bc2a261b229c7ccc6eafc982f004db336b3f33746c6d8c5a790" gracePeriod=15 Jan 29 16:30:25 crc kubenswrapper[4886]: I0129 16:30:25.746802 4886 patch_prober.go:28] interesting pod/console-54754b854f-fgkbk container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.73:8443/health\": dial tcp 10.217.0.73:8443: connect: connection refused" start-of-body= Jan 29 16:30:25 crc kubenswrapper[4886]: I0129 16:30:25.746857 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-54754b854f-fgkbk" podUID="56fe8de1-76b0-42ad-9f62-53ac51eac78d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.73:8443/health\": dial tcp 10.217.0.73:8443: connect: connection refused" Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.159475 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-54754b854f-fgkbk_56fe8de1-76b0-42ad-9f62-53ac51eac78d/console/0.log" Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.159813 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-54754b854f-fgkbk" Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.294544 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/56fe8de1-76b0-42ad-9f62-53ac51eac78d-service-ca\") pod \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\" (UID: \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\") " Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.294597 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56fe8de1-76b0-42ad-9f62-53ac51eac78d-trusted-ca-bundle\") pod \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\" (UID: \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\") " Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.294712 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/56fe8de1-76b0-42ad-9f62-53ac51eac78d-console-config\") pod \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\" (UID: \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\") " Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.294747 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/56fe8de1-76b0-42ad-9f62-53ac51eac78d-console-serving-cert\") pod \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\" (UID: \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\") " Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.294802 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/56fe8de1-76b0-42ad-9f62-53ac51eac78d-oauth-serving-cert\") pod \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\" (UID: \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\") " Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.294839 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/56fe8de1-76b0-42ad-9f62-53ac51eac78d-console-oauth-config\") pod \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\" (UID: \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\") " Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.294882 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqdgg\" (UniqueName: \"kubernetes.io/projected/56fe8de1-76b0-42ad-9f62-53ac51eac78d-kube-api-access-hqdgg\") pod \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\" (UID: \"56fe8de1-76b0-42ad-9f62-53ac51eac78d\") " Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.295450 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56fe8de1-76b0-42ad-9f62-53ac51eac78d-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "56fe8de1-76b0-42ad-9f62-53ac51eac78d" (UID: "56fe8de1-76b0-42ad-9f62-53ac51eac78d"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.295588 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56fe8de1-76b0-42ad-9f62-53ac51eac78d-service-ca" (OuterVolumeSpecName: "service-ca") pod "56fe8de1-76b0-42ad-9f62-53ac51eac78d" (UID: "56fe8de1-76b0-42ad-9f62-53ac51eac78d"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.296026 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56fe8de1-76b0-42ad-9f62-53ac51eac78d-console-config" (OuterVolumeSpecName: "console-config") pod "56fe8de1-76b0-42ad-9f62-53ac51eac78d" (UID: "56fe8de1-76b0-42ad-9f62-53ac51eac78d"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.296215 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56fe8de1-76b0-42ad-9f62-53ac51eac78d-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "56fe8de1-76b0-42ad-9f62-53ac51eac78d" (UID: "56fe8de1-76b0-42ad-9f62-53ac51eac78d"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.299506 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56fe8de1-76b0-42ad-9f62-53ac51eac78d-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "56fe8de1-76b0-42ad-9f62-53ac51eac78d" (UID: "56fe8de1-76b0-42ad-9f62-53ac51eac78d"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.299851 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56fe8de1-76b0-42ad-9f62-53ac51eac78d-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "56fe8de1-76b0-42ad-9f62-53ac51eac78d" (UID: "56fe8de1-76b0-42ad-9f62-53ac51eac78d"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.300632 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56fe8de1-76b0-42ad-9f62-53ac51eac78d-kube-api-access-hqdgg" (OuterVolumeSpecName: "kube-api-access-hqdgg") pod "56fe8de1-76b0-42ad-9f62-53ac51eac78d" (UID: "56fe8de1-76b0-42ad-9f62-53ac51eac78d"). InnerVolumeSpecName "kube-api-access-hqdgg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.396595 4886 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/56fe8de1-76b0-42ad-9f62-53ac51eac78d-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.396828 4886 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/56fe8de1-76b0-42ad-9f62-53ac51eac78d-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.396886 4886 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/56fe8de1-76b0-42ad-9f62-53ac51eac78d-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.396934 4886 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/56fe8de1-76b0-42ad-9f62-53ac51eac78d-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.396982 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqdgg\" (UniqueName: \"kubernetes.io/projected/56fe8de1-76b0-42ad-9f62-53ac51eac78d-kube-api-access-hqdgg\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.397039 4886 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/56fe8de1-76b0-42ad-9f62-53ac51eac78d-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.397087 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56fe8de1-76b0-42ad-9f62-53ac51eac78d-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.758426 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-54754b854f-fgkbk_56fe8de1-76b0-42ad-9f62-53ac51eac78d/console/0.log" Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.758475 4886 generic.go:334] "Generic (PLEG): container finished" podID="56fe8de1-76b0-42ad-9f62-53ac51eac78d" containerID="912b8ca8f57d0bc2a261b229c7ccc6eafc982f004db336b3f33746c6d8c5a790" exitCode=2 Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.758511 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-54754b854f-fgkbk" event={"ID":"56fe8de1-76b0-42ad-9f62-53ac51eac78d","Type":"ContainerDied","Data":"912b8ca8f57d0bc2a261b229c7ccc6eafc982f004db336b3f33746c6d8c5a790"} Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.758543 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-54754b854f-fgkbk" event={"ID":"56fe8de1-76b0-42ad-9f62-53ac51eac78d","Type":"ContainerDied","Data":"92457371ca67ffbaa6957a21cf77005c4601275089a8ad1b5d44bb6186c2a4ce"} Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.758571 4886 scope.go:117] "RemoveContainer" containerID="912b8ca8f57d0bc2a261b229c7ccc6eafc982f004db336b3f33746c6d8c5a790" Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.758691 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-54754b854f-fgkbk" Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.780175 4886 scope.go:117] "RemoveContainer" containerID="912b8ca8f57d0bc2a261b229c7ccc6eafc982f004db336b3f33746c6d8c5a790" Jan 29 16:30:26 crc kubenswrapper[4886]: E0129 16:30:26.780904 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"912b8ca8f57d0bc2a261b229c7ccc6eafc982f004db336b3f33746c6d8c5a790\": container with ID starting with 912b8ca8f57d0bc2a261b229c7ccc6eafc982f004db336b3f33746c6d8c5a790 not found: ID does not exist" containerID="912b8ca8f57d0bc2a261b229c7ccc6eafc982f004db336b3f33746c6d8c5a790" Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.780973 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"912b8ca8f57d0bc2a261b229c7ccc6eafc982f004db336b3f33746c6d8c5a790"} err="failed to get container status \"912b8ca8f57d0bc2a261b229c7ccc6eafc982f004db336b3f33746c6d8c5a790\": rpc error: code = NotFound desc = could not find container \"912b8ca8f57d0bc2a261b229c7ccc6eafc982f004db336b3f33746c6d8c5a790\": container with ID starting with 912b8ca8f57d0bc2a261b229c7ccc6eafc982f004db336b3f33746c6d8c5a790 not found: ID does not exist" Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.782909 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-54754b854f-fgkbk"] Jan 29 16:30:26 crc kubenswrapper[4886]: I0129 16:30:26.793137 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-54754b854f-fgkbk"] Jan 29 16:30:28 crc kubenswrapper[4886]: I0129 16:30:28.623516 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56fe8de1-76b0-42ad-9f62-53ac51eac78d" path="/var/lib/kubelet/pods/56fe8de1-76b0-42ad-9f62-53ac51eac78d/volumes" Jan 29 16:30:29 crc kubenswrapper[4886]: E0129 16:30:29.617530 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:30:30 crc kubenswrapper[4886]: E0129 16:30:30.617504 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4qbl4" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" Jan 29 16:30:34 crc kubenswrapper[4886]: E0129 16:30:34.618446 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:30:38 crc kubenswrapper[4886]: E0129 16:30:38.621893 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q5hs7" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091" Jan 29 16:30:43 crc kubenswrapper[4886]: E0129 16:30:43.618057 4886 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:30:45 crc kubenswrapper[4886]: E0129 16:30:45.616804 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4qbl4" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" Jan 29 16:30:49 crc kubenswrapper[4886]: I0129 16:30:49.618960 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:30:49 crc kubenswrapper[4886]: E0129 16:30:49.750147 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:30:49 crc kubenswrapper[4886]: E0129 16:30:49.750310 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mlnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jfv6k_openshift-marketplace(69003a39-1c09-4087-a494-ebfd69e973cf): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:30:49 crc kubenswrapper[4886]: E0129 16:30:49.751495 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from 
registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:30:52 crc kubenswrapper[4886]: E0129 16:30:52.616511 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q5hs7" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091" Jan 29 16:30:55 crc kubenswrapper[4886]: E0129 16:30:55.617896 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:31:00 crc kubenswrapper[4886]: E0129 16:31:00.766961 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:31:00 crc kubenswrapper[4886]: E0129 16:31:00.767679 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vf7sq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4qbl4_openshift-marketplace(57aa9115-b2d5-45aa-8ac3-e251c0907e45): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:31:00 crc kubenswrapper[4886]: E0129 16:31:00.769029 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: 
Jan 29 16:31:01 crc kubenswrapper[4886]: E0129 16:31:01.616942 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf"
Jan 29 16:31:07 crc kubenswrapper[4886]: E0129 16:31:07.746402 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 29 16:31:07 crc kubenswrapper[4886]: E0129 16:31:07.747546 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8jsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-q5hs7_openshift-marketplace(a7325ad0-28bf-45e0-bbd5-160f441de091): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:31:07 crc kubenswrapper[4886]: E0129 16:31:07.748883 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-q5hs7" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091"
Jan 29 16:31:08 crc kubenswrapper[4886]: E0129 16:31:08.812692 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 29 16:31:08 crc kubenswrapper[4886]: E0129 16:31:08.812902 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vn92n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-zkk68_openshift-marketplace(d84ce3e9-c41a-4a08-8d86-2a918d5e9450): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:31:08 crc kubenswrapper[4886]: E0129 16:31:08.814902 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450"
Jan 29 16:31:13 crc kubenswrapper[4886]: E0129 16:31:13.616117 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4qbl4" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45"
Jan 29 16:31:14 crc kubenswrapper[4886]: E0129 16:31:14.617026 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf"
Jan 29 16:31:19 crc kubenswrapper[4886]: E0129 16:31:19.617502 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450"
Jan 29 16:31:20 crc kubenswrapper[4886]: E0129 16:31:20.618893 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q5hs7" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091"
Jan 29 16:31:25 crc kubenswrapper[4886]: E0129 16:31:25.617271 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4qbl4" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45"
Jan 29 16:31:26 crc kubenswrapper[4886]: E0129 16:31:26.617945 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf"
Jan 29 16:31:29 crc kubenswrapper[4886]: I0129 16:31:29.660440 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 16:31:29 crc kubenswrapper[4886]: I0129 16:31:29.660785 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 16:31:30 crc kubenswrapper[4886]: E0129 16:31:30.616936 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450"
Jan 29 16:31:33 crc kubenswrapper[4886]: E0129 16:31:33.617033 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q5hs7" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091"
Jan 29 16:31:38 crc kubenswrapper[4886]: E0129 16:31:38.623349 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4qbl4" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45"
Jan 29 16:31:41 crc kubenswrapper[4886]: E0129 16:31:41.617413 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf"
Jan 29 16:31:45 crc kubenswrapper[4886]: E0129 16:31:45.617685 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450"
Jan 29 16:31:48 crc kubenswrapper[4886]: E0129 16:31:48.623428 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q5hs7" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091"
Jan 29 16:31:52 crc kubenswrapper[4886]: E0129 16:31:52.617582 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4qbl4" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45"
Jan 29 16:31:56 crc kubenswrapper[4886]: E0129 16:31:56.616566 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf"
Jan 29 16:31:59 crc kubenswrapper[4886]: I0129 16:31:59.661730 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 16:31:59 crc kubenswrapper[4886]: I0129 16:31:59.662161 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 16:32:00 crc kubenswrapper[4886]: E0129 16:32:00.616590 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450"
Jan 29 16:32:01 crc kubenswrapper[4886]: E0129 16:32:01.616181 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q5hs7" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091"
Jan 29 16:32:05 crc kubenswrapper[4886]: E0129 16:32:05.616627 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4qbl4" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45"
\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4qbl4" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" Jan 29 16:32:07 crc kubenswrapper[4886]: E0129 16:32:07.617425 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:32:12 crc kubenswrapper[4886]: E0129 16:32:12.617095 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q5hs7" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091" Jan 29 16:32:13 crc kubenswrapper[4886]: E0129 16:32:13.616069 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:32:17 crc kubenswrapper[4886]: E0129 16:32:17.616892 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4qbl4" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" Jan 29 16:32:20 crc kubenswrapper[4886]: E0129 16:32:20.618010 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:32:27 crc kubenswrapper[4886]: E0129 16:32:27.618397 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:32:27 crc kubenswrapper[4886]: E0129 16:32:27.619307 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q5hs7" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091" Jan 29 16:32:29 crc kubenswrapper[4886]: I0129 16:32:29.661616 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:32:29 crc kubenswrapper[4886]: I0129 16:32:29.661706 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:32:29 crc kubenswrapper[4886]: I0129 16:32:29.661771 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" Jan 29 16:32:29 crc kubenswrapper[4886]: I0129 16:32:29.662847 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ae7876e7e5e026deccf52515d738eb4b775938bb13eef71ab45573508b57aaa0"} pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:32:29 crc kubenswrapper[4886]: I0129 16:32:29.662959 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" containerID="cri-o://ae7876e7e5e026deccf52515d738eb4b775938bb13eef71ab45573508b57aaa0" gracePeriod=600 Jan 29 16:32:30 crc kubenswrapper[4886]: I0129 16:32:30.612594 4886 generic.go:334] "Generic (PLEG): container finished" podID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerID="ae7876e7e5e026deccf52515d738eb4b775938bb13eef71ab45573508b57aaa0" exitCode=0 Jan 29 16:32:30 crc kubenswrapper[4886]: I0129 16:32:30.612691 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerDied","Data":"ae7876e7e5e026deccf52515d738eb4b775938bb13eef71ab45573508b57aaa0"} Jan 29 16:32:30 crc kubenswrapper[4886]: I0129 16:32:30.613313 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerStarted","Data":"773fe28c1c2f4b4e6b5a35ea611b7d8ab8f392d8f1b68bb09ec93e5c483b53ed"} Jan 29 16:32:30 crc kubenswrapper[4886]: I0129 16:32:30.613370 4886 scope.go:117] "RemoveContainer" containerID="96fb4b3b0684eec0f8e815c984345d77640459634c9d28cbf8434505ebf34891" Jan 29 16:32:30 crc kubenswrapper[4886]: E0129 16:32:30.617894 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4qbl4" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" Jan 29 16:32:31 crc kubenswrapper[4886]: E0129 16:32:31.617535 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:32:38 crc kubenswrapper[4886]: E0129 16:32:38.619939 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:32:40 crc kubenswrapper[4886]: E0129 16:32:40.616641 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q5hs7" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091" Jan 29 16:32:44 crc kubenswrapper[4886]: E0129 16:32:44.618014 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4qbl4" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" Jan 29 16:32:46 crc kubenswrapper[4886]: E0129 16:32:46.617555 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:32:49 crc kubenswrapper[4886]: E0129 16:32:49.616440 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:32:53 crc kubenswrapper[4886]: E0129 16:32:53.618240 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q5hs7" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091" Jan 29 16:32:56 crc kubenswrapper[4886]: E0129 16:32:56.618104 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4qbl4" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" Jan 29 16:32:57 crc kubenswrapper[4886]: E0129 16:32:57.619131 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:33:01 crc kubenswrapper[4886]: E0129 16:33:01.617984 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:33:08 crc kubenswrapper[4886]: E0129 16:33:08.623440 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q5hs7" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091" Jan 29 16:33:09 crc kubenswrapper[4886]: E0129 16:33:09.616008 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:33:09 crc kubenswrapper[4886]: E0129 16:33:09.616274 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4qbl4" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" Jan 29 16:33:14 crc kubenswrapper[4886]: E0129 16:33:14.619411 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:33:19 crc kubenswrapper[4886]: I0129 16:33:19.621139 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bsnwn"] Jan 29 16:33:19 crc kubenswrapper[4886]: I0129 16:33:19.622184 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="nbdb" containerID="cri-o://aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8" gracePeriod=30 Jan 29 16:33:19 crc kubenswrapper[4886]: I0129 16:33:19.622239 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a" gracePeriod=30 Jan 29 16:33:19 crc kubenswrapper[4886]: I0129 16:33:19.622386 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="northd" containerID="cri-o://1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af" gracePeriod=30 Jan 29 16:33:19 crc kubenswrapper[4886]: I0129 16:33:19.622312 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="ovn-acl-logging" containerID="cri-o://54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8" gracePeriod=30 Jan 29 16:33:19 crc kubenswrapper[4886]: I0129 16:33:19.622569 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="sbdb" containerID="cri-o://38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5" gracePeriod=30 Jan 29 16:33:19 crc kubenswrapper[4886]: I0129 16:33:19.622587 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="kube-rbac-proxy-node" containerID="cri-o://34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454" gracePeriod=30 Jan 29 16:33:19 crc kubenswrapper[4886]: I0129 16:33:19.622676 4886 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="ovn-controller" containerID="cri-o://b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51" gracePeriod=30 Jan 29 16:33:19 crc kubenswrapper[4886]: I0129 16:33:19.671645 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="ovnkube-controller" containerID="cri-o://f3e810b92c533dbff0b37232e3b59d6146e02214a9506edd851862a6737312a5" gracePeriod=30 Jan 29 16:33:19 crc kubenswrapper[4886]: E0129 16:33:19.674561 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 29 16:33:19 crc kubenswrapper[4886]: E0129 16:33:19.674949 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 29 16:33:19 crc kubenswrapper[4886]: E0129 16:33:19.677570 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 29 16:33:19 crc kubenswrapper[4886]: E0129 16:33:19.677687 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 29 16:33:19 crc kubenswrapper[4886]: E0129 16:33:19.679596 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 29 16:33:19 crc kubenswrapper[4886]: E0129 16:33:19.679657 4886 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="nbdb" Jan 29 16:33:19 crc kubenswrapper[4886]: E0129 16:33:19.680282 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 29 16:33:19 crc kubenswrapper[4886]: E0129 16:33:19.680359 4886 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="sbdb" Jan 29 16:33:19 crc kubenswrapper[4886]: I0129 16:33:19.981450 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bsnwn_d46238ab-90d4-41b8-b546-6dbff06cf5ed/ovnkube-controller/3.log" Jan 29 16:33:19 crc kubenswrapper[4886]: I0129 16:33:19.984722 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bsnwn_d46238ab-90d4-41b8-b546-6dbff06cf5ed/ovn-acl-logging/0.log" Jan 29 16:33:19 crc kubenswrapper[4886]: I0129 16:33:19.985234 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bsnwn_d46238ab-90d4-41b8-b546-6dbff06cf5ed/ovn-controller/0.log" Jan 29 16:33:19 crc kubenswrapper[4886]: I0129 16:33:19.985615 4886 generic.go:334] "Generic (PLEG): container finished" podID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerID="f3e810b92c533dbff0b37232e3b59d6146e02214a9506edd851862a6737312a5" exitCode=0 Jan 29 16:33:19 crc kubenswrapper[4886]: I0129 16:33:19.985641 4886 generic.go:334] "Generic (PLEG): container finished" podID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerID="38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5" exitCode=0 Jan 29 16:33:19 crc kubenswrapper[4886]: I0129 16:33:19.985651 4886 generic.go:334] "Generic (PLEG): container finished" podID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerID="aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8" exitCode=0 Jan 29 16:33:19 crc kubenswrapper[4886]: I0129 16:33:19.985660 4886 generic.go:334] "Generic (PLEG): container finished" podID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerID="1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af" exitCode=0 Jan 29 16:33:19 crc kubenswrapper[4886]: I0129 16:33:19.985670 4886 generic.go:334] "Generic (PLEG): container finished" podID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerID="54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8" exitCode=143 Jan 29 16:33:19 crc kubenswrapper[4886]: I0129 16:33:19.985681 4886 generic.go:334] "Generic (PLEG): container finished" podID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerID="b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51" exitCode=143 Jan 29 16:33:19 crc 
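The exitCode=143 reported for ovn-acl-logging and ovn-controller is 128+15: the process died on the SIGTERM delivered at the start of the 30-second grace period, while exitCode=0 means the container shut down cleanly before any escalation. A stdlib-only illustration of that two-phase stop (my own sketch, not kubelet code):

```go
// Two-phase container stop, sketched with the standard library only.
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace mirrors the gracePeriod=30 entries above: deliver
// SIGTERM, wait up to the grace period, then escalate to SIGKILL.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
	done := make(chan struct{})
	go func() { _ = cmd.Wait(); close(done) }()
	_ = cmd.Process.Signal(syscall.SIGTERM)
	select {
	case <-done: // exited within the grace period
	case <-time.After(grace):
		_ = cmd.Process.Kill() // SIGKILL; a shell would report 137
		<-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60") // stand-in for a container process
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	stopWithGrace(cmd, 30*time.Second)
	if ws, ok := cmd.ProcessState.Sys().(syscall.WaitStatus); ok && ws.Signaled() {
		// 128+15 = 143, matching the exitCode=143 lines in this log.
		fmt.Println("runtime-style exit code:", 128+int(ws.Signal()))
	}
}
```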
Jan 29 16:33:19 crc kubenswrapper[4886]: I0129 16:33:19.985703 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" event={"ID":"d46238ab-90d4-41b8-b546-6dbff06cf5ed","Type":"ContainerDied","Data":"f3e810b92c533dbff0b37232e3b59d6146e02214a9506edd851862a6737312a5"}
Jan 29 16:33:19 crc kubenswrapper[4886]: I0129 16:33:19.985737 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" event={"ID":"d46238ab-90d4-41b8-b546-6dbff06cf5ed","Type":"ContainerDied","Data":"38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5"}
Jan 29 16:33:19 crc kubenswrapper[4886]: I0129 16:33:19.985757 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" event={"ID":"d46238ab-90d4-41b8-b546-6dbff06cf5ed","Type":"ContainerDied","Data":"aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8"}
Jan 29 16:33:19 crc kubenswrapper[4886]: I0129 16:33:19.985766 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" event={"ID":"d46238ab-90d4-41b8-b546-6dbff06cf5ed","Type":"ContainerDied","Data":"1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af"}
Jan 29 16:33:19 crc kubenswrapper[4886]: I0129 16:33:19.985779 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" event={"ID":"d46238ab-90d4-41b8-b546-6dbff06cf5ed","Type":"ContainerDied","Data":"54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8"}
Jan 29 16:33:19 crc kubenswrapper[4886]: I0129 16:33:19.985787 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" event={"ID":"d46238ab-90d4-41b8-b546-6dbff06cf5ed","Type":"ContainerDied","Data":"b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51"}
Jan 29 16:33:19 crc kubenswrapper[4886]: I0129 16:33:19.985802 4886 scope.go:117] "RemoveContainer" containerID="a0641acb8929ee41033e4169acb367c2a8a89a440e89fc29dde22190651e439f"
Jan 29 16:33:19 crc kubenswrapper[4886]: I0129 16:33:19.987550 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4dstj_b415d17e-f329-40e7-8a3f-32881cb5347a/kube-multus/2.log"
Jan 29 16:33:19 crc kubenswrapper[4886]: I0129 16:33:19.988040 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4dstj_b415d17e-f329-40e7-8a3f-32881cb5347a/kube-multus/1.log"
Jan 29 16:33:19 crc kubenswrapper[4886]: I0129 16:33:19.988094 4886 generic.go:334] "Generic (PLEG): container finished" podID="b415d17e-f329-40e7-8a3f-32881cb5347a" containerID="e74f1c8b65fe500a145e8a234d995565d439027c89c5aa1da47c13b626c7d606" exitCode=2
Jan 29 16:33:19 crc kubenswrapper[4886]: I0129 16:33:19.988129 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4dstj" event={"ID":"b415d17e-f329-40e7-8a3f-32881cb5347a","Type":"ContainerDied","Data":"e74f1c8b65fe500a145e8a234d995565d439027c89c5aa1da47c13b626c7d606"}
Jan 29 16:33:19 crc kubenswrapper[4886]: I0129 16:33:19.988732 4886 scope.go:117] "RemoveContainer" containerID="e74f1c8b65fe500a145e8a234d995565d439027c89c5aa1da47c13b626c7d606"
Jan 29 16:33:19 crc kubenswrapper[4886]: E0129 16:33:19.989041 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-4dstj_openshift-multus(b415d17e-f329-40e7-8a3f-32881cb5347a)\"" pod="openshift-multus/multus-4dstj" podUID="b415d17e-f329-40e7-8a3f-32881cb5347a"
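The generic.go:334 lines come from the pod lifecycle event generator (PLEG), which polls the runtime, diffs container states, and hands ContainerDied events to the sync loop (the kubelet.go:2453 entries above). A deliberately simplified model of that relist diff, for illustration only:

```go
// Simplified PLEG-style relist: diff two runtime polls into events.
package main

import "fmt"

type state int

const (
	running state = iota
	exited
)

type event struct {
	ContainerID string
	Type        string // e.g. "ContainerDied"
}

// relist compares the container states seen in two successive runtime
// polls and emits lifecycle events, roughly what produces the paired
// "Generic (PLEG): container finished" / "ContainerDied" lines above.
func relist(prev, curr map[string]state) []event {
	var events []event
	for id, s := range curr {
		if prev[id] == running && s == exited {
			events = append(events, event{id, "ContainerDied"})
		}
	}
	return events
}

func main() {
	prev := map[string]state{"f3e810b9": running, "54fecd80": running}
	curr := map[string]state{"f3e810b9": exited, "54fecd80": exited}
	fmt.Println(relist(prev, curr)) // two ContainerDied events
}
```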
pod="openshift-multus/multus-4dstj" podUID="b415d17e-f329-40e7-8a3f-32881cb5347a" Jan 29 16:33:20 crc kubenswrapper[4886]: I0129 16:33:20.021452 4886 scope.go:117] "RemoveContainer" containerID="0fbf425aaf0e257fa72dc096677e8404be047665a998729a21862b66d4162248" Jan 29 16:33:20 crc kubenswrapper[4886]: I0129 16:33:20.997218 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4dstj_b415d17e-f329-40e7-8a3f-32881cb5347a/kube-multus/2.log" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.003323 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bsnwn_d46238ab-90d4-41b8-b546-6dbff06cf5ed/ovn-acl-logging/0.log" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.003944 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bsnwn_d46238ab-90d4-41b8-b546-6dbff06cf5ed/ovn-controller/0.log" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.004488 4886 generic.go:334] "Generic (PLEG): container finished" podID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerID="db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a" exitCode=0 Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.004540 4886 generic.go:334] "Generic (PLEG): container finished" podID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerID="34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454" exitCode=0 Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.004574 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" event={"ID":"d46238ab-90d4-41b8-b546-6dbff06cf5ed","Type":"ContainerDied","Data":"db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a"} Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.004613 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" event={"ID":"d46238ab-90d4-41b8-b546-6dbff06cf5ed","Type":"ContainerDied","Data":"34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454"} Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.340735 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bsnwn_d46238ab-90d4-41b8-b546-6dbff06cf5ed/ovn-acl-logging/0.log" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.341664 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bsnwn_d46238ab-90d4-41b8-b546-6dbff06cf5ed/ovn-controller/0.log" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.342386 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.420383 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fm92b"] Jan 29 16:33:21 crc kubenswrapper[4886]: E0129 16:33:21.420769 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="ovnkube-controller" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.420805 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="ovnkube-controller" Jan 29 16:33:21 crc kubenswrapper[4886]: E0129 16:33:21.420830 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="sbdb" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.420842 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="sbdb" Jan 29 16:33:21 crc kubenswrapper[4886]: E0129 16:33:21.420859 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="ovnkube-controller" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.420871 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="ovnkube-controller" Jan 29 16:33:21 crc kubenswrapper[4886]: E0129 16:33:21.420885 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="kubecfg-setup" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.420897 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="kubecfg-setup" Jan 29 16:33:21 crc kubenswrapper[4886]: E0129 16:33:21.420917 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="ovn-acl-logging" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.420930 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="ovn-acl-logging" Jan 29 16:33:21 crc kubenswrapper[4886]: E0129 16:33:21.420947 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="ovn-controller" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.420963 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="ovn-controller" Jan 29 16:33:21 crc kubenswrapper[4886]: E0129 16:33:21.420983 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.420998 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 16:33:21 crc kubenswrapper[4886]: E0129 16:33:21.421023 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56fe8de1-76b0-42ad-9f62-53ac51eac78d" containerName="console" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.421039 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="56fe8de1-76b0-42ad-9f62-53ac51eac78d" containerName="console" Jan 29 16:33:21 crc kubenswrapper[4886]: E0129 16:33:21.421062 4886 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="nbdb" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.421077 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="nbdb" Jan 29 16:33:21 crc kubenswrapper[4886]: E0129 16:33:21.421094 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="kube-rbac-proxy-node" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.421110 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="kube-rbac-proxy-node" Jan 29 16:33:21 crc kubenswrapper[4886]: E0129 16:33:21.421140 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="northd" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.421157 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="northd" Jan 29 16:33:21 crc kubenswrapper[4886]: E0129 16:33:21.421183 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="ovnkube-controller" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.421202 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="ovnkube-controller" Jan 29 16:33:21 crc kubenswrapper[4886]: E0129 16:33:21.421222 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="ovnkube-controller" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.421239 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="ovnkube-controller" Jan 29 16:33:21 crc kubenswrapper[4886]: E0129 16:33:21.421259 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18290a86-b94a-42c5-9f50-1614077f881b" containerName="collect-profiles" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.421275 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="18290a86-b94a-42c5-9f50-1614077f881b" containerName="collect-profiles" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.421508 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.421532 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="ovnkube-controller" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.421548 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="18290a86-b94a-42c5-9f50-1614077f881b" containerName="collect-profiles" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.421592 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="sbdb" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.421614 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="ovn-controller" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.421629 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="ovn-acl-logging" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.421644 4886 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="ovnkube-controller" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.421658 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="kube-rbac-proxy-node" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.421675 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="northd" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.421691 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="56fe8de1-76b0-42ad-9f62-53ac51eac78d" containerName="console" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.421713 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="nbdb" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.421731 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="ovnkube-controller" Jan 29 16:33:21 crc kubenswrapper[4886]: E0129 16:33:21.421924 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="ovnkube-controller" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.421941 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="ovnkube-controller" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.422131 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="ovnkube-controller" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.422514 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" containerName="ovnkube-controller" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.424909 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.504556 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-run-ovn\") pod \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.504629 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-node-log\") pod \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.504668 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-run-ovn-kubernetes\") pod \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.504710 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-var-lib-cni-networks-ovn-kubernetes\") pod \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.504711 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "d46238ab-90d4-41b8-b546-6dbff06cf5ed" (UID: "d46238ab-90d4-41b8-b546-6dbff06cf5ed"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.504796 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-node-log" (OuterVolumeSpecName: "node-log") pod "d46238ab-90d4-41b8-b546-6dbff06cf5ed" (UID: "d46238ab-90d4-41b8-b546-6dbff06cf5ed"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.504738 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "d46238ab-90d4-41b8-b546-6dbff06cf5ed" (UID: "d46238ab-90d4-41b8-b546-6dbff06cf5ed"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.504830 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "d46238ab-90d4-41b8-b546-6dbff06cf5ed" (UID: "d46238ab-90d4-41b8-b546-6dbff06cf5ed"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.504766 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "d46238ab-90d4-41b8-b546-6dbff06cf5ed" (UID: "d46238ab-90d4-41b8-b546-6dbff06cf5ed"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.504782 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-cni-netd\") pod \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.504932 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d46238ab-90d4-41b8-b546-6dbff06cf5ed-ovnkube-script-lib\") pod \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.504970 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-systemd-units\") pod \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.504998 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-run-openvswitch\") pod \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.505031 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-var-lib-openvswitch\") pod \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.505062 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "d46238ab-90d4-41b8-b546-6dbff06cf5ed" (UID: "d46238ab-90d4-41b8-b546-6dbff06cf5ed"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.505076 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d46238ab-90d4-41b8-b546-6dbff06cf5ed-env-overrides\") pod \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.505100 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "d46238ab-90d4-41b8-b546-6dbff06cf5ed" (UID: "d46238ab-90d4-41b8-b546-6dbff06cf5ed"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.505105 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "d46238ab-90d4-41b8-b546-6dbff06cf5ed" (UID: "d46238ab-90d4-41b8-b546-6dbff06cf5ed"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.505112 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-etc-openvswitch\") pod \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.505148 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "d46238ab-90d4-41b8-b546-6dbff06cf5ed" (UID: "d46238ab-90d4-41b8-b546-6dbff06cf5ed"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.505208 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-slash\") pod \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.505266 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-run-systemd\") pod \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.505303 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-run-netns\") pod \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.505368 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-slash" (OuterVolumeSpecName: "host-slash") pod "d46238ab-90d4-41b8-b546-6dbff06cf5ed" (UID: "d46238ab-90d4-41b8-b546-6dbff06cf5ed"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.505462 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8f8x\" (UniqueName: \"kubernetes.io/projected/d46238ab-90d4-41b8-b546-6dbff06cf5ed-kube-api-access-h8f8x\") pod \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.505533 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-log-socket\") pod \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.505570 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-kubelet\") pod \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.505587 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d46238ab-90d4-41b8-b546-6dbff06cf5ed-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "d46238ab-90d4-41b8-b546-6dbff06cf5ed" (UID: "d46238ab-90d4-41b8-b546-6dbff06cf5ed"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.505578 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "d46238ab-90d4-41b8-b546-6dbff06cf5ed" (UID: "d46238ab-90d4-41b8-b546-6dbff06cf5ed"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.505661 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-log-socket" (OuterVolumeSpecName: "log-socket") pod "d46238ab-90d4-41b8-b546-6dbff06cf5ed" (UID: "d46238ab-90d4-41b8-b546-6dbff06cf5ed"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.505645 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "d46238ab-90d4-41b8-b546-6dbff06cf5ed" (UID: "d46238ab-90d4-41b8-b546-6dbff06cf5ed"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.505630 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d46238ab-90d4-41b8-b546-6dbff06cf5ed-ovn-node-metrics-cert\") pod \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.505765 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d46238ab-90d4-41b8-b546-6dbff06cf5ed-ovnkube-config\") pod \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.505794 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d46238ab-90d4-41b8-b546-6dbff06cf5ed-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "d46238ab-90d4-41b8-b546-6dbff06cf5ed" (UID: "d46238ab-90d4-41b8-b546-6dbff06cf5ed"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.505828 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-cni-bin\") pod \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\" (UID: \"d46238ab-90d4-41b8-b546-6dbff06cf5ed\") " Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.505935 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "d46238ab-90d4-41b8-b546-6dbff06cf5ed" (UID: "d46238ab-90d4-41b8-b546-6dbff06cf5ed"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.506291 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d46238ab-90d4-41b8-b546-6dbff06cf5ed-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "d46238ab-90d4-41b8-b546-6dbff06cf5ed" (UID: "d46238ab-90d4-41b8-b546-6dbff06cf5ed"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.506452 4886 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.506492 4886 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d46238ab-90d4-41b8-b546-6dbff06cf5ed-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.506513 4886 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.506531 4886 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.506547 4886 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.506562 4886 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d46238ab-90d4-41b8-b546-6dbff06cf5ed-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.506578 4886 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.506594 4886 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-slash\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.506609 4886 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.506625 4886 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-log-socket\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.506640 4886 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.506656 4886 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d46238ab-90d4-41b8-b546-6dbff06cf5ed-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.506674 4886 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:21 crc 
kubenswrapper[4886]: I0129 16:33:21.506690 4886 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.506705 4886 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-node-log\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.506721 4886 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.506741 4886 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.513010 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d46238ab-90d4-41b8-b546-6dbff06cf5ed-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "d46238ab-90d4-41b8-b546-6dbff06cf5ed" (UID: "d46238ab-90d4-41b8-b546-6dbff06cf5ed"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.513137 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d46238ab-90d4-41b8-b546-6dbff06cf5ed-kube-api-access-h8f8x" (OuterVolumeSpecName: "kube-api-access-h8f8x") pod "d46238ab-90d4-41b8-b546-6dbff06cf5ed" (UID: "d46238ab-90d4-41b8-b546-6dbff06cf5ed"). InnerVolumeSpecName "kube-api-access-h8f8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.536260 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "d46238ab-90d4-41b8-b546-6dbff06cf5ed" (UID: "d46238ab-90d4-41b8-b546-6dbff06cf5ed"). InnerVolumeSpecName "run-systemd". 
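Each volume of the deleted pod walks the same three steps above: the reconciler sees it in the actual state of the world but no longer in the desired state, starts UnmountVolume, the operation generator's TearDown runs (trivial for host-path volumes), and the volume is finally reported detached. A schematic loop under those assumptions (not the real reconciler, which is asynchronous and plugin-driven):

```go
// Schematic unmount-side volume reconciliation.
package main

import "fmt"

// reconcileUnmounts walks the mounted volumes (actual state) and tears
// down any volume that no longer appears in the desired state -- the
// "UnmountVolume started" -> "TearDown succeeded" -> "Volume detached"
// progression logged above for each ovnkube-node-bsnwn volume.
func reconcileUnmounts(mounted, desired map[string]bool) {
	for vol := range mounted {
		if desired[vol] {
			continue
		}
		fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", vol)
		// TearDown would unmount or unlink here; host-path volumes need no real work.
		fmt.Printf("UnmountVolume.TearDown succeeded for volume %q\n", vol)
		delete(mounted, vol)
		fmt.Printf("Volume detached for volume %q\n", vol)
	}
}

func main() {
	mounted := map[string]bool{"run-ovn": true, "node-log": true}
	reconcileUnmounts(mounted, map[string]bool{}) // pod deleted: nothing desired
}
```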
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.608128 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-systemd-units\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.608484 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-var-lib-openvswitch\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.608725 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-host-run-ovn-kubernetes\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.608936 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.609109 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwf9m\" (UniqueName: \"kubernetes.io/projected/19111cdf-053c-4093-af99-ad30edda5ec8-kube-api-access-qwf9m\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.609259 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/19111cdf-053c-4093-af99-ad30edda5ec8-ovn-node-metrics-cert\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.609620 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/19111cdf-053c-4093-af99-ad30edda5ec8-ovnkube-script-lib\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.609762 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-host-run-netns\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.609807 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-slash\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-host-slash\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.609876 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-etc-openvswitch\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.609929 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-host-cni-bin\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.609960 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-run-ovn\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.609978 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-host-cni-netd\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.610000 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-log-socket\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.610024 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-node-log\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.610044 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-run-systemd\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.610084 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-host-kubelet\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.610102 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/19111cdf-053c-4093-af99-ad30edda5ec8-ovnkube-config\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.610150 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-run-openvswitch\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.610180 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/19111cdf-053c-4093-af99-ad30edda5ec8-env-overrides\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.610252 4886 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d46238ab-90d4-41b8-b546-6dbff06cf5ed-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.610269 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8f8x\" (UniqueName: \"kubernetes.io/projected/d46238ab-90d4-41b8-b546-6dbff06cf5ed-kube-api-access-h8f8x\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.610282 4886 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d46238ab-90d4-41b8-b546-6dbff06cf5ed-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:21 crc kubenswrapper[4886]: E0129 16:33:21.619230 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q5hs7" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.711548 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-host-run-ovn-kubernetes\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.711615 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.711641 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwf9m\" (UniqueName: \"kubernetes.io/projected/19111cdf-053c-4093-af99-ad30edda5ec8-kube-api-access-qwf9m\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc 
kubenswrapper[4886]: I0129 16:33:21.711666 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/19111cdf-053c-4093-af99-ad30edda5ec8-ovn-node-metrics-cert\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.711687 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/19111cdf-053c-4093-af99-ad30edda5ec8-ovnkube-script-lib\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.711719 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-host-run-netns\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.711745 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-host-slash\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.711784 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-etc-openvswitch\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.711826 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-host-cni-bin\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.711845 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-run-ovn\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.711838 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.711904 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-host-cni-netd\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.711863 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-host-cni-netd\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.712005 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-log-socket\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.712069 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-node-log\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.712126 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-run-systemd\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.712200 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/19111cdf-053c-4093-af99-ad30edda5ec8-ovnkube-config\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.712250 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-host-kubelet\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.712388 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-run-openvswitch\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.712434 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-log-socket\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.712490 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-host-kubelet\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.712510 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-run-openvswitch\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.712437 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-host-run-ovn-kubernetes\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.712391 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-node-log\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.712569 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-run-systemd\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.712591 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-etc-openvswitch\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.712644 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-host-run-netns\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.712686 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-host-slash\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.712703 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/19111cdf-053c-4093-af99-ad30edda5ec8-ovnkube-script-lib\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.712443 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/19111cdf-053c-4093-af99-ad30edda5ec8-env-overrides\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.712751 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-var-lib-openvswitch\") pod \"ovnkube-node-fm92b\" (UID: 
\"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.712775 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-systemd-units\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.712841 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-systemd-units\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.712917 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-run-ovn\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.712943 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-var-lib-openvswitch\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.712752 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/19111cdf-053c-4093-af99-ad30edda5ec8-host-cni-bin\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.713187 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/19111cdf-053c-4093-af99-ad30edda5ec8-env-overrides\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.713635 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/19111cdf-053c-4093-af99-ad30edda5ec8-ovnkube-config\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.715364 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/19111cdf-053c-4093-af99-ad30edda5ec8-ovn-node-metrics-cert\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 16:33:21.728796 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwf9m\" (UniqueName: \"kubernetes.io/projected/19111cdf-053c-4093-af99-ad30edda5ec8-kube-api-access-qwf9m\") pod \"ovnkube-node-fm92b\" (UID: \"19111cdf-053c-4093-af99-ad30edda5ec8\") " pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: I0129 
16:33:21.747762 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:21 crc kubenswrapper[4886]: W0129 16:33:21.776673 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19111cdf_053c_4093_af99_ad30edda5ec8.slice/crio-135b31cd61eff929a85b1b414c60bba00d2e4e06835617685664137fa559c05d WatchSource:0}: Error finding container 135b31cd61eff929a85b1b414c60bba00d2e4e06835617685664137fa559c05d: Status 404 returned error can't find the container with id 135b31cd61eff929a85b1b414c60bba00d2e4e06835617685664137fa559c05d Jan 29 16:33:22 crc kubenswrapper[4886]: I0129 16:33:22.012752 4886 generic.go:334] "Generic (PLEG): container finished" podID="19111cdf-053c-4093-af99-ad30edda5ec8" containerID="f683edcd1501f89a0b295a6a611bc59d07cfb788312c5f3e8fcb1155e41df8d2" exitCode=0 Jan 29 16:33:22 crc kubenswrapper[4886]: I0129 16:33:22.012886 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" event={"ID":"19111cdf-053c-4093-af99-ad30edda5ec8","Type":"ContainerDied","Data":"f683edcd1501f89a0b295a6a611bc59d07cfb788312c5f3e8fcb1155e41df8d2"} Jan 29 16:33:22 crc kubenswrapper[4886]: I0129 16:33:22.013160 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" event={"ID":"19111cdf-053c-4093-af99-ad30edda5ec8","Type":"ContainerStarted","Data":"135b31cd61eff929a85b1b414c60bba00d2e4e06835617685664137fa559c05d"} Jan 29 16:33:22 crc kubenswrapper[4886]: I0129 16:33:22.020503 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bsnwn_d46238ab-90d4-41b8-b546-6dbff06cf5ed/ovn-acl-logging/0.log" Jan 29 16:33:22 crc kubenswrapper[4886]: I0129 16:33:22.021248 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bsnwn_d46238ab-90d4-41b8-b546-6dbff06cf5ed/ovn-controller/0.log" Jan 29 16:33:22 crc kubenswrapper[4886]: I0129 16:33:22.021975 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" event={"ID":"d46238ab-90d4-41b8-b546-6dbff06cf5ed","Type":"ContainerDied","Data":"4945a9e8ab72e79012e84ebf83643f2ee2b4c4028b579b7a2f7381c763968861"} Jan 29 16:33:22 crc kubenswrapper[4886]: I0129 16:33:22.022041 4886 scope.go:117] "RemoveContainer" containerID="f3e810b92c533dbff0b37232e3b59d6146e02214a9506edd851862a6737312a5" Jan 29 16:33:22 crc kubenswrapper[4886]: I0129 16:33:22.022256 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bsnwn" Jan 29 16:33:22 crc kubenswrapper[4886]: I0129 16:33:22.065784 4886 scope.go:117] "RemoveContainer" containerID="38f5a9a3458a900401d93f99197abc69e3baaf3038a89e74d142344fbf0d9ff5" Jan 29 16:33:22 crc kubenswrapper[4886]: I0129 16:33:22.074483 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bsnwn"] Jan 29 16:33:22 crc kubenswrapper[4886]: I0129 16:33:22.079172 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bsnwn"] Jan 29 16:33:22 crc kubenswrapper[4886]: I0129 16:33:22.106209 4886 scope.go:117] "RemoveContainer" containerID="aff586e7c8306a470164e6d1603b7a84b79e22ff53f7871cff535736f72f77b8" Jan 29 16:33:22 crc kubenswrapper[4886]: I0129 16:33:22.127846 4886 scope.go:117] "RemoveContainer" containerID="1103e45d1299bd7cc9890cc70e1b35be3c7e5cdc36cdc23191cb32c65b6851af" Jan 29 16:33:22 crc kubenswrapper[4886]: I0129 16:33:22.144584 4886 scope.go:117] "RemoveContainer" containerID="db747d554077a641bca85a4b376af5cc3ebe9e9addb59303e40961567d28422a" Jan 29 16:33:22 crc kubenswrapper[4886]: I0129 16:33:22.166978 4886 scope.go:117] "RemoveContainer" containerID="34083f87301d604fb38ce6765e0d429895295ab0c89f02abfc1cfde1d71f4454" Jan 29 16:33:22 crc kubenswrapper[4886]: I0129 16:33:22.193166 4886 scope.go:117] "RemoveContainer" containerID="54fecd80df24f20c923283f6966a565b8cf9cee51d2194836164df5fc69600b8" Jan 29 16:33:22 crc kubenswrapper[4886]: I0129 16:33:22.215230 4886 scope.go:117] "RemoveContainer" containerID="b912acee2b3fec4fd1d0704a94a867e79b9191286159220760027325f0709c51" Jan 29 16:33:22 crc kubenswrapper[4886]: I0129 16:33:22.232115 4886 scope.go:117] "RemoveContainer" containerID="f18adfac47665579e806165f73793a4a301dcd95317ce1ac58ab8c4551aab72b" Jan 29 16:33:22 crc kubenswrapper[4886]: E0129 16:33:22.618782 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4qbl4" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" Jan 29 16:33:22 crc kubenswrapper[4886]: I0129 16:33:22.639979 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d46238ab-90d4-41b8-b546-6dbff06cf5ed" path="/var/lib/kubelet/pods/d46238ab-90d4-41b8-b546-6dbff06cf5ed/volumes" Jan 29 16:33:23 crc kubenswrapper[4886]: I0129 16:33:23.031169 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" event={"ID":"19111cdf-053c-4093-af99-ad30edda5ec8","Type":"ContainerStarted","Data":"58729ca7ba88813b953ef04ed4a802c907de57bbeacf683e7b8182a8761c8104"} Jan 29 16:33:23 crc kubenswrapper[4886]: I0129 16:33:23.031218 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" event={"ID":"19111cdf-053c-4093-af99-ad30edda5ec8","Type":"ContainerStarted","Data":"04b3572bac5235c653957af0253cc167698dffb3f729f9847af808b432395b10"} Jan 29 16:33:23 crc kubenswrapper[4886]: I0129 16:33:23.031230 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" event={"ID":"19111cdf-053c-4093-af99-ad30edda5ec8","Type":"ContainerStarted","Data":"3f9ce34cbaedc840b516dff954995b2e4306927f416cc6d1c3da421fec5b8c77"} Jan 29 16:33:23 crc kubenswrapper[4886]: I0129 16:33:23.031242 4886 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" event={"ID":"19111cdf-053c-4093-af99-ad30edda5ec8","Type":"ContainerStarted","Data":"05408936123b4483b582625e2140810b7656b3c4c86da278a698264562d7a238"} Jan 29 16:33:23 crc kubenswrapper[4886]: I0129 16:33:23.031255 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" event={"ID":"19111cdf-053c-4093-af99-ad30edda5ec8","Type":"ContainerStarted","Data":"992a9a12d58b004fc5045fa851c0c5d8ddfd906aa6008b79e78cadc867d9eb25"} Jan 29 16:33:23 crc kubenswrapper[4886]: I0129 16:33:23.031266 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" event={"ID":"19111cdf-053c-4093-af99-ad30edda5ec8","Type":"ContainerStarted","Data":"6c5f1c457e473a871404a2ebb2563ac4ed21af223cc41661b3e680b2020432cb"} Jan 29 16:33:23 crc kubenswrapper[4886]: E0129 16:33:23.616441 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:33:25 crc kubenswrapper[4886]: I0129 16:33:25.050388 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" event={"ID":"19111cdf-053c-4093-af99-ad30edda5ec8","Type":"ContainerStarted","Data":"d145893750c9188369a0cc42d9e3cf847a7ab3852baf0847c0f303fd783f77b5"} Jan 29 16:33:26 crc kubenswrapper[4886]: E0129 16:33:26.618683 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:33:28 crc kubenswrapper[4886]: I0129 16:33:28.073977 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" event={"ID":"19111cdf-053c-4093-af99-ad30edda5ec8","Type":"ContainerStarted","Data":"fc15846ede655b62a2d71f8d125d4b6bdac031667a3d448eb4b594a8415eaca0"} Jan 29 16:33:28 crc kubenswrapper[4886]: I0129 16:33:28.074229 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:28 crc kubenswrapper[4886]: I0129 16:33:28.074374 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:28 crc kubenswrapper[4886]: I0129 16:33:28.099462 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:28 crc kubenswrapper[4886]: I0129 16:33:28.147959 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" podStartSLOduration=7.147937595 podStartE2EDuration="7.147937595s" podCreationTimestamp="2026-01-29 16:33:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:33:28.105685099 +0000 UTC m=+691.014404381" watchObservedRunningTime="2026-01-29 16:33:28.147937595 +0000 UTC m=+691.056656887" Jan 29 16:33:29 crc kubenswrapper[4886]: I0129 16:33:29.085718 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:29 crc kubenswrapper[4886]: I0129 16:33:29.111776 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:32 crc kubenswrapper[4886]: E0129 16:33:32.617918 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q5hs7" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091" Jan 29 16:33:33 crc kubenswrapper[4886]: I0129 16:33:33.615905 4886 scope.go:117] "RemoveContainer" containerID="e74f1c8b65fe500a145e8a234d995565d439027c89c5aa1da47c13b626c7d606" Jan 29 16:33:33 crc kubenswrapper[4886]: E0129 16:33:33.616574 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-4dstj_openshift-multus(b415d17e-f329-40e7-8a3f-32881cb5347a)\"" pod="openshift-multus/multus-4dstj" podUID="b415d17e-f329-40e7-8a3f-32881cb5347a" Jan 29 16:33:34 crc kubenswrapper[4886]: E0129 16:33:34.736917 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:33:34 crc kubenswrapper[4886]: E0129 16:33:34.737399 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mlnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jfv6k_openshift-marketplace(69003a39-1c09-4087-a494-ebfd69e973cf): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" 
logger="UnhandledError" Jan 29 16:33:34 crc kubenswrapper[4886]: E0129 16:33:34.738873 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:33:35 crc kubenswrapper[4886]: E0129 16:33:35.617897 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4qbl4" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" Jan 29 16:33:40 crc kubenswrapper[4886]: E0129 16:33:40.618998 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:33:43 crc kubenswrapper[4886]: E0129 16:33:43.617382 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q5hs7" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091" Jan 29 16:33:44 crc kubenswrapper[4886]: I0129 16:33:44.615217 4886 scope.go:117] "RemoveContainer" containerID="e74f1c8b65fe500a145e8a234d995565d439027c89c5aa1da47c13b626c7d606" Jan 29 16:33:45 crc kubenswrapper[4886]: I0129 16:33:45.194228 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4dstj_b415d17e-f329-40e7-8a3f-32881cb5347a/kube-multus/2.log" Jan 29 16:33:45 crc kubenswrapper[4886]: I0129 16:33:45.194705 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4dstj" event={"ID":"b415d17e-f329-40e7-8a3f-32881cb5347a","Type":"ContainerStarted","Data":"f9b217aab06574ff3e962be323a2a8a06c95f4a16fa9897a5196355d9fc68145"} Jan 29 16:33:48 crc kubenswrapper[4886]: E0129 16:33:48.626229 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:33:51 crc kubenswrapper[4886]: I0129 16:33:51.232920 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qbl4" event={"ID":"57aa9115-b2d5-45aa-8ac3-e251c0907e45","Type":"ContainerStarted","Data":"d611665f3c9d008d6e151d05993039687945f7572ec764930a3d9ccea183c1b4"} Jan 29 16:33:51 crc kubenswrapper[4886]: I0129 16:33:51.776460 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fm92b" Jan 29 16:33:52 crc kubenswrapper[4886]: I0129 16:33:52.900217 4886 generic.go:334] "Generic (PLEG): container finished" podID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" containerID="d611665f3c9d008d6e151d05993039687945f7572ec764930a3d9ccea183c1b4" exitCode=0 Jan 29 16:33:52 crc 
kubenswrapper[4886]: I0129 16:33:52.900305 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qbl4" event={"ID":"57aa9115-b2d5-45aa-8ac3-e251c0907e45","Type":"ContainerDied","Data":"d611665f3c9d008d6e151d05993039687945f7572ec764930a3d9ccea183c1b4"} Jan 29 16:33:53 crc kubenswrapper[4886]: E0129 16:33:53.735076 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:33:53 crc kubenswrapper[4886]: E0129 16:33:53.735598 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vn92n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-zkk68_openshift-marketplace(d84ce3e9-c41a-4a08-8d86-2a918d5e9450): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:33:53 crc kubenswrapper[4886]: E0129 16:33:53.736797 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:33:53 crc kubenswrapper[4886]: I0129 16:33:53.907213 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qbl4" event={"ID":"57aa9115-b2d5-45aa-8ac3-e251c0907e45","Type":"ContainerStarted","Data":"26900ab338bee6799e69566c733a5063575a2c6eeacf71f0f523248ae71b1b2d"} Jan 29 16:33:53 crc kubenswrapper[4886]: I0129 16:33:53.926927 4886 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/redhat-marketplace-4qbl4" podStartSLOduration=2.240297018 podStartE2EDuration="5m47.92690452s" podCreationTimestamp="2026-01-29 16:28:06 +0000 UTC" firstStartedPulling="2026-01-29 16:28:07.715205598 +0000 UTC m=+370.623924870" lastFinishedPulling="2026-01-29 16:33:53.4018131 +0000 UTC m=+716.310532372" observedRunningTime="2026-01-29 16:33:53.923442004 +0000 UTC m=+716.832161276" watchObservedRunningTime="2026-01-29 16:33:53.92690452 +0000 UTC m=+716.835623812" Jan 29 16:33:55 crc kubenswrapper[4886]: I0129 16:33:55.931513 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5hs7" event={"ID":"a7325ad0-28bf-45e0-bbd5-160f441de091","Type":"ContainerStarted","Data":"35212758091bf8c3d45fb0a080810d5fded73e71ef6c555edea92ef2d2dcec88"} Jan 29 16:33:56 crc kubenswrapper[4886]: I0129 16:33:56.942809 4886 generic.go:334] "Generic (PLEG): container finished" podID="a7325ad0-28bf-45e0-bbd5-160f441de091" containerID="35212758091bf8c3d45fb0a080810d5fded73e71ef6c555edea92ef2d2dcec88" exitCode=0 Jan 29 16:33:56 crc kubenswrapper[4886]: I0129 16:33:56.942866 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5hs7" event={"ID":"a7325ad0-28bf-45e0-bbd5-160f441de091","Type":"ContainerDied","Data":"35212758091bf8c3d45fb0a080810d5fded73e71ef6c555edea92ef2d2dcec88"} Jan 29 16:33:57 crc kubenswrapper[4886]: I0129 16:33:57.016994 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4qbl4" Jan 29 16:33:57 crc kubenswrapper[4886]: I0129 16:33:57.017067 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4qbl4" Jan 29 16:33:57 crc kubenswrapper[4886]: I0129 16:33:57.087717 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4qbl4" Jan 29 16:33:57 crc kubenswrapper[4886]: I0129 16:33:57.953314 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5hs7" event={"ID":"a7325ad0-28bf-45e0-bbd5-160f441de091","Type":"ContainerStarted","Data":"efe76a3e970848dc3228f84915fb95af5f8ed14f0bcb5b641221638cab0f714e"} Jan 29 16:33:58 crc kubenswrapper[4886]: I0129 16:33:58.001527 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q5hs7" podStartSLOduration=3.35713132 podStartE2EDuration="5m54.001507629s" podCreationTimestamp="2026-01-29 16:28:04 +0000 UTC" firstStartedPulling="2026-01-29 16:28:06.706720432 +0000 UTC m=+369.615439704" lastFinishedPulling="2026-01-29 16:33:57.351096731 +0000 UTC m=+720.259816013" observedRunningTime="2026-01-29 16:33:57.998030063 +0000 UTC m=+720.906749355" watchObservedRunningTime="2026-01-29 16:33:58.001507629 +0000 UTC m=+720.910226921" Jan 29 16:33:59 crc kubenswrapper[4886]: E0129 16:33:59.616685 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:34:05 crc kubenswrapper[4886]: I0129 16:34:05.192131 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q5hs7" Jan 29 16:34:05 crc kubenswrapper[4886]: 
I0129 16:34:05.192439 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q5hs7" Jan 29 16:34:05 crc kubenswrapper[4886]: I0129 16:34:05.249205 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q5hs7" Jan 29 16:34:06 crc kubenswrapper[4886]: I0129 16:34:06.085686 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q5hs7" Jan 29 16:34:07 crc kubenswrapper[4886]: I0129 16:34:07.069820 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4qbl4" Jan 29 16:34:07 crc kubenswrapper[4886]: E0129 16:34:07.620783 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:34:12 crc kubenswrapper[4886]: E0129 16:34:12.617531 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:34:18 crc kubenswrapper[4886]: E0129 16:34:18.626121 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:34:25 crc kubenswrapper[4886]: E0129 16:34:25.618134 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:34:29 crc kubenswrapper[4886]: I0129 16:34:29.661088 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:34:29 crc kubenswrapper[4886]: I0129 16:34:29.661470 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:34:30 crc kubenswrapper[4886]: E0129 16:34:30.618480 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:34:38 crc kubenswrapper[4886]: E0129 16:34:38.623491 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:34:43 crc kubenswrapper[4886]: E0129 16:34:43.618121 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:34:48 crc kubenswrapper[4886]: I0129 16:34:48.195944 4886 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 16:34:52 crc kubenswrapper[4886]: E0129 16:34:52.618701 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:34:57 crc kubenswrapper[4886]: E0129 16:34:57.618116 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:34:59 crc kubenswrapper[4886]: I0129 16:34:59.661132 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:34:59 crc kubenswrapper[4886]: I0129 16:34:59.661224 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:35:04 crc kubenswrapper[4886]: E0129 16:35:04.618399 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:35:08 crc kubenswrapper[4886]: E0129 16:35:08.621257 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:35:18 crc kubenswrapper[4886]: E0129 16:35:18.621769 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:35:22 
crc kubenswrapper[4886]: E0129 16:35:22.618683 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:35:29 crc kubenswrapper[4886]: I0129 16:35:29.660860 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:35:29 crc kubenswrapper[4886]: I0129 16:35:29.661734 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:35:29 crc kubenswrapper[4886]: I0129 16:35:29.661800 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" Jan 29 16:35:29 crc kubenswrapper[4886]: I0129 16:35:29.662773 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"773fe28c1c2f4b4e6b5a35ea611b7d8ab8f392d8f1b68bb09ec93e5c483b53ed"} pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:35:29 crc kubenswrapper[4886]: I0129 16:35:29.662863 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" containerID="cri-o://773fe28c1c2f4b4e6b5a35ea611b7d8ab8f392d8f1b68bb09ec93e5c483b53ed" gracePeriod=600 Jan 29 16:35:30 crc kubenswrapper[4886]: E0129 16:35:30.616841 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:35:30 crc kubenswrapper[4886]: I0129 16:35:30.645235 4886 generic.go:334] "Generic (PLEG): container finished" podID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerID="773fe28c1c2f4b4e6b5a35ea611b7d8ab8f392d8f1b68bb09ec93e5c483b53ed" exitCode=0 Jan 29 16:35:30 crc kubenswrapper[4886]: I0129 16:35:30.645284 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerDied","Data":"773fe28c1c2f4b4e6b5a35ea611b7d8ab8f392d8f1b68bb09ec93e5c483b53ed"} Jan 29 16:35:30 crc kubenswrapper[4886]: I0129 16:35:30.645347 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerStarted","Data":"50ba5c9bdbddc145f7d20c044a7cd326eb16e00aa141bfc3e8c4f610ef31ae97"} Jan 29 16:35:30 crc kubenswrapper[4886]: I0129 16:35:30.645365 4886 
scope.go:117] "RemoveContainer" containerID="ae7876e7e5e026deccf52515d738eb4b775938bb13eef71ab45573508b57aaa0" Jan 29 16:35:34 crc kubenswrapper[4886]: E0129 16:35:34.618869 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:35:40 crc kubenswrapper[4886]: I0129 16:35:40.293060 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b9tgx"] Jan 29 16:35:40 crc kubenswrapper[4886]: I0129 16:35:40.296298 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b9tgx" Jan 29 16:35:40 crc kubenswrapper[4886]: I0129 16:35:40.309448 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9tgx"] Jan 29 16:35:40 crc kubenswrapper[4886]: I0129 16:35:40.385760 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73caa1a0-803a-489b-925a-62f4c7d85295-utilities\") pod \"redhat-marketplace-b9tgx\" (UID: \"73caa1a0-803a-489b-925a-62f4c7d85295\") " pod="openshift-marketplace/redhat-marketplace-b9tgx" Jan 29 16:35:40 crc kubenswrapper[4886]: I0129 16:35:40.386065 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73caa1a0-803a-489b-925a-62f4c7d85295-catalog-content\") pod \"redhat-marketplace-b9tgx\" (UID: \"73caa1a0-803a-489b-925a-62f4c7d85295\") " pod="openshift-marketplace/redhat-marketplace-b9tgx" Jan 29 16:35:40 crc kubenswrapper[4886]: I0129 16:35:40.386218 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b96x9\" (UniqueName: \"kubernetes.io/projected/73caa1a0-803a-489b-925a-62f4c7d85295-kube-api-access-b96x9\") pod \"redhat-marketplace-b9tgx\" (UID: \"73caa1a0-803a-489b-925a-62f4c7d85295\") " pod="openshift-marketplace/redhat-marketplace-b9tgx" Jan 29 16:35:40 crc kubenswrapper[4886]: I0129 16:35:40.488043 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73caa1a0-803a-489b-925a-62f4c7d85295-utilities\") pod \"redhat-marketplace-b9tgx\" (UID: \"73caa1a0-803a-489b-925a-62f4c7d85295\") " pod="openshift-marketplace/redhat-marketplace-b9tgx" Jan 29 16:35:40 crc kubenswrapper[4886]: I0129 16:35:40.488526 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73caa1a0-803a-489b-925a-62f4c7d85295-catalog-content\") pod \"redhat-marketplace-b9tgx\" (UID: \"73caa1a0-803a-489b-925a-62f4c7d85295\") " pod="openshift-marketplace/redhat-marketplace-b9tgx" Jan 29 16:35:40 crc kubenswrapper[4886]: I0129 16:35:40.488784 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b96x9\" (UniqueName: \"kubernetes.io/projected/73caa1a0-803a-489b-925a-62f4c7d85295-kube-api-access-b96x9\") pod \"redhat-marketplace-b9tgx\" (UID: \"73caa1a0-803a-489b-925a-62f4c7d85295\") " pod="openshift-marketplace/redhat-marketplace-b9tgx" Jan 29 16:35:40 crc kubenswrapper[4886]: I0129 16:35:40.488809 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73caa1a0-803a-489b-925a-62f4c7d85295-utilities\") pod \"redhat-marketplace-b9tgx\" (UID: \"73caa1a0-803a-489b-925a-62f4c7d85295\") " pod="openshift-marketplace/redhat-marketplace-b9tgx" Jan 29 16:35:40 crc kubenswrapper[4886]: I0129 16:35:40.489455 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73caa1a0-803a-489b-925a-62f4c7d85295-catalog-content\") pod \"redhat-marketplace-b9tgx\" (UID: \"73caa1a0-803a-489b-925a-62f4c7d85295\") " pod="openshift-marketplace/redhat-marketplace-b9tgx" Jan 29 16:35:40 crc kubenswrapper[4886]: I0129 16:35:40.516177 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b96x9\" (UniqueName: \"kubernetes.io/projected/73caa1a0-803a-489b-925a-62f4c7d85295-kube-api-access-b96x9\") pod \"redhat-marketplace-b9tgx\" (UID: \"73caa1a0-803a-489b-925a-62f4c7d85295\") " pod="openshift-marketplace/redhat-marketplace-b9tgx" Jan 29 16:35:40 crc kubenswrapper[4886]: I0129 16:35:40.626600 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b9tgx" Jan 29 16:35:40 crc kubenswrapper[4886]: I0129 16:35:40.855156 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9tgx"] Jan 29 16:35:41 crc kubenswrapper[4886]: I0129 16:35:41.730466 4886 generic.go:334] "Generic (PLEG): container finished" podID="73caa1a0-803a-489b-925a-62f4c7d85295" containerID="0af281ba22a48525a89d293814700da60b0038002508ae0fe09557b961c806e8" exitCode=0 Jan 29 16:35:41 crc kubenswrapper[4886]: I0129 16:35:41.730529 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9tgx" event={"ID":"73caa1a0-803a-489b-925a-62f4c7d85295","Type":"ContainerDied","Data":"0af281ba22a48525a89d293814700da60b0038002508ae0fe09557b961c806e8"} Jan 29 16:35:41 crc kubenswrapper[4886]: I0129 16:35:41.730591 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9tgx" event={"ID":"73caa1a0-803a-489b-925a-62f4c7d85295","Type":"ContainerStarted","Data":"6b3bcf1eb7b421af2b3c1dea2211d6c94e3f2fcb7c357bd69518e7c58f34f4f8"} Jan 29 16:35:41 crc kubenswrapper[4886]: E0129 16:35:41.881937 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:35:41 crc kubenswrapper[4886]: E0129 16:35:41.882646 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b96x9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-b9tgx_openshift-marketplace(73caa1a0-803a-489b-925a-62f4c7d85295): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:35:41 crc kubenswrapper[4886]: E0129 16:35:41.883992 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-b9tgx" podUID="73caa1a0-803a-489b-925a-62f4c7d85295" Jan 29 16:35:42 crc kubenswrapper[4886]: E0129 16:35:42.742183 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-b9tgx" podUID="73caa1a0-803a-489b-925a-62f4c7d85295" Jan 29 16:35:43 crc kubenswrapper[4886]: E0129 16:35:43.616770 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:35:46 crc kubenswrapper[4886]: E0129 16:35:46.619544 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:35:53 crc kubenswrapper[4886]: I0129 16:35:53.620014 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:35:54 crc kubenswrapper[4886]: I0129 16:35:54.858953 4886 generic.go:334] "Generic (PLEG): container finished" 
podID="73caa1a0-803a-489b-925a-62f4c7d85295" containerID="c0a3c283ef8d7e07ee977dc4f960790916999f5c601d1154dce01509fccc0843" exitCode=0 Jan 29 16:35:54 crc kubenswrapper[4886]: I0129 16:35:54.859066 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9tgx" event={"ID":"73caa1a0-803a-489b-925a-62f4c7d85295","Type":"ContainerDied","Data":"c0a3c283ef8d7e07ee977dc4f960790916999f5c601d1154dce01509fccc0843"} Jan 29 16:35:55 crc kubenswrapper[4886]: I0129 16:35:55.871056 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9tgx" event={"ID":"73caa1a0-803a-489b-925a-62f4c7d85295","Type":"ContainerStarted","Data":"1df3debc9dc32a464a8a01ceb66660fd934db40e0418a44b73501976b98cd6f6"} Jan 29 16:35:55 crc kubenswrapper[4886]: I0129 16:35:55.897476 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b9tgx" podStartSLOduration=2.353241804 podStartE2EDuration="15.897456928s" podCreationTimestamp="2026-01-29 16:35:40 +0000 UTC" firstStartedPulling="2026-01-29 16:35:41.732577828 +0000 UTC m=+824.641297130" lastFinishedPulling="2026-01-29 16:35:55.276792972 +0000 UTC m=+838.185512254" observedRunningTime="2026-01-29 16:35:55.895453483 +0000 UTC m=+838.804172775" watchObservedRunningTime="2026-01-29 16:35:55.897456928 +0000 UTC m=+838.806176200" Jan 29 16:35:56 crc kubenswrapper[4886]: E0129 16:35:56.616749 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:36:00 crc kubenswrapper[4886]: I0129 16:36:00.631181 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b9tgx" Jan 29 16:36:00 crc kubenswrapper[4886]: I0129 16:36:00.632149 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b9tgx" Jan 29 16:36:00 crc kubenswrapper[4886]: I0129 16:36:00.683450 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b9tgx" Jan 29 16:36:00 crc kubenswrapper[4886]: I0129 16:36:00.945010 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b9tgx" Jan 29 16:36:01 crc kubenswrapper[4886]: E0129 16:36:01.619301 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:36:03 crc kubenswrapper[4886]: I0129 16:36:03.068222 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9tgx"] Jan 29 16:36:03 crc kubenswrapper[4886]: I0129 16:36:03.068507 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b9tgx" podUID="73caa1a0-803a-489b-925a-62f4c7d85295" containerName="registry-server" containerID="cri-o://1df3debc9dc32a464a8a01ceb66660fd934db40e0418a44b73501976b98cd6f6" gracePeriod=2 Jan 29 16:36:03 crc kubenswrapper[4886]: I0129 16:36:03.471698 
4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b9tgx" Jan 29 16:36:03 crc kubenswrapper[4886]: I0129 16:36:03.549261 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73caa1a0-803a-489b-925a-62f4c7d85295-utilities\") pod \"73caa1a0-803a-489b-925a-62f4c7d85295\" (UID: \"73caa1a0-803a-489b-925a-62f4c7d85295\") " Jan 29 16:36:03 crc kubenswrapper[4886]: I0129 16:36:03.549407 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73caa1a0-803a-489b-925a-62f4c7d85295-catalog-content\") pod \"73caa1a0-803a-489b-925a-62f4c7d85295\" (UID: \"73caa1a0-803a-489b-925a-62f4c7d85295\") " Jan 29 16:36:03 crc kubenswrapper[4886]: I0129 16:36:03.549499 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b96x9\" (UniqueName: \"kubernetes.io/projected/73caa1a0-803a-489b-925a-62f4c7d85295-kube-api-access-b96x9\") pod \"73caa1a0-803a-489b-925a-62f4c7d85295\" (UID: \"73caa1a0-803a-489b-925a-62f4c7d85295\") " Jan 29 16:36:03 crc kubenswrapper[4886]: I0129 16:36:03.551464 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73caa1a0-803a-489b-925a-62f4c7d85295-utilities" (OuterVolumeSpecName: "utilities") pod "73caa1a0-803a-489b-925a-62f4c7d85295" (UID: "73caa1a0-803a-489b-925a-62f4c7d85295"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:36:03 crc kubenswrapper[4886]: I0129 16:36:03.555595 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73caa1a0-803a-489b-925a-62f4c7d85295-kube-api-access-b96x9" (OuterVolumeSpecName: "kube-api-access-b96x9") pod "73caa1a0-803a-489b-925a-62f4c7d85295" (UID: "73caa1a0-803a-489b-925a-62f4c7d85295"). InnerVolumeSpecName "kube-api-access-b96x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:36:03 crc kubenswrapper[4886]: I0129 16:36:03.589506 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73caa1a0-803a-489b-925a-62f4c7d85295-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73caa1a0-803a-489b-925a-62f4c7d85295" (UID: "73caa1a0-803a-489b-925a-62f4c7d85295"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:36:03 crc kubenswrapper[4886]: I0129 16:36:03.651737 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73caa1a0-803a-489b-925a-62f4c7d85295-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:36:03 crc kubenswrapper[4886]: I0129 16:36:03.651889 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b96x9\" (UniqueName: \"kubernetes.io/projected/73caa1a0-803a-489b-925a-62f4c7d85295-kube-api-access-b96x9\") on node \"crc\" DevicePath \"\"" Jan 29 16:36:03 crc kubenswrapper[4886]: I0129 16:36:03.652044 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73caa1a0-803a-489b-925a-62f4c7d85295-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:36:03 crc kubenswrapper[4886]: I0129 16:36:03.927628 4886 generic.go:334] "Generic (PLEG): container finished" podID="73caa1a0-803a-489b-925a-62f4c7d85295" containerID="1df3debc9dc32a464a8a01ceb66660fd934db40e0418a44b73501976b98cd6f6" exitCode=0 Jan 29 16:36:03 crc kubenswrapper[4886]: I0129 16:36:03.927689 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b9tgx" Jan 29 16:36:03 crc kubenswrapper[4886]: I0129 16:36:03.927700 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9tgx" event={"ID":"73caa1a0-803a-489b-925a-62f4c7d85295","Type":"ContainerDied","Data":"1df3debc9dc32a464a8a01ceb66660fd934db40e0418a44b73501976b98cd6f6"} Jan 29 16:36:03 crc kubenswrapper[4886]: I0129 16:36:03.927747 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9tgx" event={"ID":"73caa1a0-803a-489b-925a-62f4c7d85295","Type":"ContainerDied","Data":"6b3bcf1eb7b421af2b3c1dea2211d6c94e3f2fcb7c357bd69518e7c58f34f4f8"} Jan 29 16:36:03 crc kubenswrapper[4886]: I0129 16:36:03.927765 4886 scope.go:117] "RemoveContainer" containerID="1df3debc9dc32a464a8a01ceb66660fd934db40e0418a44b73501976b98cd6f6" Jan 29 16:36:03 crc kubenswrapper[4886]: I0129 16:36:03.962803 4886 scope.go:117] "RemoveContainer" containerID="c0a3c283ef8d7e07ee977dc4f960790916999f5c601d1154dce01509fccc0843" Jan 29 16:36:03 crc kubenswrapper[4886]: I0129 16:36:03.970272 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9tgx"] Jan 29 16:36:03 crc kubenswrapper[4886]: I0129 16:36:03.977098 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9tgx"] Jan 29 16:36:03 crc kubenswrapper[4886]: I0129 16:36:03.995284 4886 scope.go:117] "RemoveContainer" containerID="0af281ba22a48525a89d293814700da60b0038002508ae0fe09557b961c806e8" Jan 29 16:36:04 crc kubenswrapper[4886]: I0129 16:36:04.023710 4886 scope.go:117] "RemoveContainer" containerID="1df3debc9dc32a464a8a01ceb66660fd934db40e0418a44b73501976b98cd6f6" Jan 29 16:36:04 crc kubenswrapper[4886]: E0129 16:36:04.024259 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1df3debc9dc32a464a8a01ceb66660fd934db40e0418a44b73501976b98cd6f6\": container with ID starting with 1df3debc9dc32a464a8a01ceb66660fd934db40e0418a44b73501976b98cd6f6 not found: ID does not exist" containerID="1df3debc9dc32a464a8a01ceb66660fd934db40e0418a44b73501976b98cd6f6" Jan 29 16:36:04 crc kubenswrapper[4886]: I0129 16:36:04.024392 4886 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1df3debc9dc32a464a8a01ceb66660fd934db40e0418a44b73501976b98cd6f6"} err="failed to get container status \"1df3debc9dc32a464a8a01ceb66660fd934db40e0418a44b73501976b98cd6f6\": rpc error: code = NotFound desc = could not find container \"1df3debc9dc32a464a8a01ceb66660fd934db40e0418a44b73501976b98cd6f6\": container with ID starting with 1df3debc9dc32a464a8a01ceb66660fd934db40e0418a44b73501976b98cd6f6 not found: ID does not exist" Jan 29 16:36:04 crc kubenswrapper[4886]: I0129 16:36:04.024542 4886 scope.go:117] "RemoveContainer" containerID="c0a3c283ef8d7e07ee977dc4f960790916999f5c601d1154dce01509fccc0843" Jan 29 16:36:04 crc kubenswrapper[4886]: E0129 16:36:04.025045 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0a3c283ef8d7e07ee977dc4f960790916999f5c601d1154dce01509fccc0843\": container with ID starting with c0a3c283ef8d7e07ee977dc4f960790916999f5c601d1154dce01509fccc0843 not found: ID does not exist" containerID="c0a3c283ef8d7e07ee977dc4f960790916999f5c601d1154dce01509fccc0843" Jan 29 16:36:04 crc kubenswrapper[4886]: I0129 16:36:04.025081 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0a3c283ef8d7e07ee977dc4f960790916999f5c601d1154dce01509fccc0843"} err="failed to get container status \"c0a3c283ef8d7e07ee977dc4f960790916999f5c601d1154dce01509fccc0843\": rpc error: code = NotFound desc = could not find container \"c0a3c283ef8d7e07ee977dc4f960790916999f5c601d1154dce01509fccc0843\": container with ID starting with c0a3c283ef8d7e07ee977dc4f960790916999f5c601d1154dce01509fccc0843 not found: ID does not exist" Jan 29 16:36:04 crc kubenswrapper[4886]: I0129 16:36:04.025109 4886 scope.go:117] "RemoveContainer" containerID="0af281ba22a48525a89d293814700da60b0038002508ae0fe09557b961c806e8" Jan 29 16:36:04 crc kubenswrapper[4886]: E0129 16:36:04.025425 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0af281ba22a48525a89d293814700da60b0038002508ae0fe09557b961c806e8\": container with ID starting with 0af281ba22a48525a89d293814700da60b0038002508ae0fe09557b961c806e8 not found: ID does not exist" containerID="0af281ba22a48525a89d293814700da60b0038002508ae0fe09557b961c806e8" Jan 29 16:36:04 crc kubenswrapper[4886]: I0129 16:36:04.025475 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0af281ba22a48525a89d293814700da60b0038002508ae0fe09557b961c806e8"} err="failed to get container status \"0af281ba22a48525a89d293814700da60b0038002508ae0fe09557b961c806e8\": rpc error: code = NotFound desc = could not find container \"0af281ba22a48525a89d293814700da60b0038002508ae0fe09557b961c806e8\": container with ID starting with 0af281ba22a48525a89d293814700da60b0038002508ae0fe09557b961c806e8 not found: ID does not exist" Jan 29 16:36:04 crc kubenswrapper[4886]: I0129 16:36:04.628574 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73caa1a0-803a-489b-925a-62f4c7d85295" path="/var/lib/kubelet/pods/73caa1a0-803a-489b-925a-62f4c7d85295/volumes" Jan 29 16:36:11 crc kubenswrapper[4886]: E0129 16:36:11.617454 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:36:14 crc kubenswrapper[4886]: E0129 16:36:14.617761 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:36:23 crc kubenswrapper[4886]: E0129 16:36:23.618508 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:36:29 crc kubenswrapper[4886]: E0129 16:36:29.618391 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:36:35 crc kubenswrapper[4886]: E0129 16:36:35.617439 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:36:43 crc kubenswrapper[4886]: E0129 16:36:43.669449 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:36:47 crc kubenswrapper[4886]: E0129 16:36:47.618809 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:36:54 crc kubenswrapper[4886]: E0129 16:36:54.617979 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:37:01 crc kubenswrapper[4886]: E0129 16:37:01.618066 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:37:08 crc kubenswrapper[4886]: E0129 16:37:08.619929 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:37:12 crc kubenswrapper[4886]: E0129 16:37:12.617414 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:37:20 crc kubenswrapper[4886]: E0129 16:37:20.618406 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:37:26 crc kubenswrapper[4886]: E0129 16:37:26.621001 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:37:29 crc kubenswrapper[4886]: I0129 16:37:29.661167 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:37:29 crc kubenswrapper[4886]: I0129 16:37:29.661830 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:37:31 crc kubenswrapper[4886]: E0129 16:37:31.617282 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:37:37 crc kubenswrapper[4886]: E0129 16:37:37.619726 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:37:45 crc kubenswrapper[4886]: I0129 16:37:45.895999 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kfgdj"] Jan 29 16:37:45 crc kubenswrapper[4886]: E0129 16:37:45.897059 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73caa1a0-803a-489b-925a-62f4c7d85295" containerName="extract-utilities" Jan 29 16:37:45 crc kubenswrapper[4886]: I0129 16:37:45.897085 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="73caa1a0-803a-489b-925a-62f4c7d85295" containerName="extract-utilities" Jan 29 16:37:45 crc kubenswrapper[4886]: E0129 16:37:45.897116 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73caa1a0-803a-489b-925a-62f4c7d85295" 
containerName="registry-server" Jan 29 16:37:45 crc kubenswrapper[4886]: I0129 16:37:45.897130 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="73caa1a0-803a-489b-925a-62f4c7d85295" containerName="registry-server" Jan 29 16:37:45 crc kubenswrapper[4886]: E0129 16:37:45.897156 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73caa1a0-803a-489b-925a-62f4c7d85295" containerName="extract-content" Jan 29 16:37:45 crc kubenswrapper[4886]: I0129 16:37:45.897171 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="73caa1a0-803a-489b-925a-62f4c7d85295" containerName="extract-content" Jan 29 16:37:45 crc kubenswrapper[4886]: I0129 16:37:45.897516 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="73caa1a0-803a-489b-925a-62f4c7d85295" containerName="registry-server" Jan 29 16:37:45 crc kubenswrapper[4886]: I0129 16:37:45.899553 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kfgdj" Jan 29 16:37:45 crc kubenswrapper[4886]: I0129 16:37:45.913679 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kfgdj"] Jan 29 16:37:46 crc kubenswrapper[4886]: I0129 16:37:46.036794 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4408259-440c-4434-ad5e-df143591092f-catalog-content\") pod \"certified-operators-kfgdj\" (UID: \"b4408259-440c-4434-ad5e-df143591092f\") " pod="openshift-marketplace/certified-operators-kfgdj" Jan 29 16:37:46 crc kubenswrapper[4886]: I0129 16:37:46.036855 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4408259-440c-4434-ad5e-df143591092f-utilities\") pod \"certified-operators-kfgdj\" (UID: \"b4408259-440c-4434-ad5e-df143591092f\") " pod="openshift-marketplace/certified-operators-kfgdj" Jan 29 16:37:46 crc kubenswrapper[4886]: I0129 16:37:46.036887 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt878\" (UniqueName: \"kubernetes.io/projected/b4408259-440c-4434-ad5e-df143591092f-kube-api-access-qt878\") pod \"certified-operators-kfgdj\" (UID: \"b4408259-440c-4434-ad5e-df143591092f\") " pod="openshift-marketplace/certified-operators-kfgdj" Jan 29 16:37:46 crc kubenswrapper[4886]: I0129 16:37:46.138404 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4408259-440c-4434-ad5e-df143591092f-utilities\") pod \"certified-operators-kfgdj\" (UID: \"b4408259-440c-4434-ad5e-df143591092f\") " pod="openshift-marketplace/certified-operators-kfgdj" Jan 29 16:37:46 crc kubenswrapper[4886]: I0129 16:37:46.138536 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt878\" (UniqueName: \"kubernetes.io/projected/b4408259-440c-4434-ad5e-df143591092f-kube-api-access-qt878\") pod \"certified-operators-kfgdj\" (UID: \"b4408259-440c-4434-ad5e-df143591092f\") " pod="openshift-marketplace/certified-operators-kfgdj" Jan 29 16:37:46 crc kubenswrapper[4886]: I0129 16:37:46.138685 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4408259-440c-4434-ad5e-df143591092f-catalog-content\") pod \"certified-operators-kfgdj\" (UID: 
\"b4408259-440c-4434-ad5e-df143591092f\") " pod="openshift-marketplace/certified-operators-kfgdj" Jan 29 16:37:46 crc kubenswrapper[4886]: I0129 16:37:46.139664 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4408259-440c-4434-ad5e-df143591092f-utilities\") pod \"certified-operators-kfgdj\" (UID: \"b4408259-440c-4434-ad5e-df143591092f\") " pod="openshift-marketplace/certified-operators-kfgdj" Jan 29 16:37:46 crc kubenswrapper[4886]: I0129 16:37:46.139695 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4408259-440c-4434-ad5e-df143591092f-catalog-content\") pod \"certified-operators-kfgdj\" (UID: \"b4408259-440c-4434-ad5e-df143591092f\") " pod="openshift-marketplace/certified-operators-kfgdj" Jan 29 16:37:46 crc kubenswrapper[4886]: I0129 16:37:46.172612 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt878\" (UniqueName: \"kubernetes.io/projected/b4408259-440c-4434-ad5e-df143591092f-kube-api-access-qt878\") pod \"certified-operators-kfgdj\" (UID: \"b4408259-440c-4434-ad5e-df143591092f\") " pod="openshift-marketplace/certified-operators-kfgdj" Jan 29 16:37:46 crc kubenswrapper[4886]: I0129 16:37:46.231984 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kfgdj" Jan 29 16:37:46 crc kubenswrapper[4886]: I0129 16:37:46.451094 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kfgdj"] Jan 29 16:37:46 crc kubenswrapper[4886]: E0129 16:37:46.615850 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:37:46 crc kubenswrapper[4886]: I0129 16:37:46.746141 4886 generic.go:334] "Generic (PLEG): container finished" podID="b4408259-440c-4434-ad5e-df143591092f" containerID="d70d1fce763398b1fbc89d3ba02890b194f9bd437f727f0609064eb7a07084e7" exitCode=0 Jan 29 16:37:46 crc kubenswrapper[4886]: I0129 16:37:46.746199 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kfgdj" event={"ID":"b4408259-440c-4434-ad5e-df143591092f","Type":"ContainerDied","Data":"d70d1fce763398b1fbc89d3ba02890b194f9bd437f727f0609064eb7a07084e7"} Jan 29 16:37:46 crc kubenswrapper[4886]: I0129 16:37:46.746281 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kfgdj" event={"ID":"b4408259-440c-4434-ad5e-df143591092f","Type":"ContainerStarted","Data":"65cd5bcca908b0b496f52d5ad6cc1abf1980809b3bca9141ebb7782171f5ef55"} Jan 29 16:37:48 crc kubenswrapper[4886]: I0129 16:37:48.765581 4886 generic.go:334] "Generic (PLEG): container finished" podID="b4408259-440c-4434-ad5e-df143591092f" containerID="4a8106271fae12af1142ac5ef147ed049a9212bff974e10331949896bfe2f22a" exitCode=0 Jan 29 16:37:48 crc kubenswrapper[4886]: I0129 16:37:48.765646 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kfgdj" event={"ID":"b4408259-440c-4434-ad5e-df143591092f","Type":"ContainerDied","Data":"4a8106271fae12af1142ac5ef147ed049a9212bff974e10331949896bfe2f22a"} Jan 29 16:37:49 crc kubenswrapper[4886]: I0129 
16:37:49.788560 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kfgdj" event={"ID":"b4408259-440c-4434-ad5e-df143591092f","Type":"ContainerStarted","Data":"7f8799db03d44b9bc3afe805b7e6af24b1d2e2fc103b5b76d9aaef5455993dee"} Jan 29 16:37:49 crc kubenswrapper[4886]: I0129 16:37:49.815538 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kfgdj" podStartSLOduration=2.343172752 podStartE2EDuration="4.815515761s" podCreationTimestamp="2026-01-29 16:37:45 +0000 UTC" firstStartedPulling="2026-01-29 16:37:46.748088802 +0000 UTC m=+949.656808084" lastFinishedPulling="2026-01-29 16:37:49.220431821 +0000 UTC m=+952.129151093" observedRunningTime="2026-01-29 16:37:49.811788192 +0000 UTC m=+952.720507464" watchObservedRunningTime="2026-01-29 16:37:49.815515761 +0000 UTC m=+952.724235043" Jan 29 16:37:52 crc kubenswrapper[4886]: E0129 16:37:52.617959 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:37:56 crc kubenswrapper[4886]: I0129 16:37:56.232884 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kfgdj" Jan 29 16:37:56 crc kubenswrapper[4886]: I0129 16:37:56.234666 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kfgdj" Jan 29 16:37:56 crc kubenswrapper[4886]: I0129 16:37:56.302904 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kfgdj" Jan 29 16:37:56 crc kubenswrapper[4886]: I0129 16:37:56.897699 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kfgdj" Jan 29 16:37:58 crc kubenswrapper[4886]: E0129 16:37:58.623586 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:37:58 crc kubenswrapper[4886]: I0129 16:37:58.678040 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kfgdj"] Jan 29 16:37:58 crc kubenswrapper[4886]: I0129 16:37:58.853005 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kfgdj" podUID="b4408259-440c-4434-ad5e-df143591092f" containerName="registry-server" containerID="cri-o://7f8799db03d44b9bc3afe805b7e6af24b1d2e2fc103b5b76d9aaef5455993dee" gracePeriod=2 Jan 29 16:37:59 crc kubenswrapper[4886]: I0129 16:37:59.660843 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:37:59 crc kubenswrapper[4886]: I0129 16:37:59.660897 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" 
podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:37:59 crc kubenswrapper[4886]: I0129 16:37:59.716102 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kfgdj" Jan 29 16:37:59 crc kubenswrapper[4886]: I0129 16:37:59.845744 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4408259-440c-4434-ad5e-df143591092f-utilities\") pod \"b4408259-440c-4434-ad5e-df143591092f\" (UID: \"b4408259-440c-4434-ad5e-df143591092f\") " Jan 29 16:37:59 crc kubenswrapper[4886]: I0129 16:37:59.845822 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4408259-440c-4434-ad5e-df143591092f-catalog-content\") pod \"b4408259-440c-4434-ad5e-df143591092f\" (UID: \"b4408259-440c-4434-ad5e-df143591092f\") " Jan 29 16:37:59 crc kubenswrapper[4886]: I0129 16:37:59.845910 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt878\" (UniqueName: \"kubernetes.io/projected/b4408259-440c-4434-ad5e-df143591092f-kube-api-access-qt878\") pod \"b4408259-440c-4434-ad5e-df143591092f\" (UID: \"b4408259-440c-4434-ad5e-df143591092f\") " Jan 29 16:37:59 crc kubenswrapper[4886]: I0129 16:37:59.847481 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4408259-440c-4434-ad5e-df143591092f-utilities" (OuterVolumeSpecName: "utilities") pod "b4408259-440c-4434-ad5e-df143591092f" (UID: "b4408259-440c-4434-ad5e-df143591092f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:37:59 crc kubenswrapper[4886]: I0129 16:37:59.852427 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4408259-440c-4434-ad5e-df143591092f-kube-api-access-qt878" (OuterVolumeSpecName: "kube-api-access-qt878") pod "b4408259-440c-4434-ad5e-df143591092f" (UID: "b4408259-440c-4434-ad5e-df143591092f"). InnerVolumeSpecName "kube-api-access-qt878". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:37:59 crc kubenswrapper[4886]: I0129 16:37:59.861581 4886 generic.go:334] "Generic (PLEG): container finished" podID="b4408259-440c-4434-ad5e-df143591092f" containerID="7f8799db03d44b9bc3afe805b7e6af24b1d2e2fc103b5b76d9aaef5455993dee" exitCode=0 Jan 29 16:37:59 crc kubenswrapper[4886]: I0129 16:37:59.861632 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kfgdj" event={"ID":"b4408259-440c-4434-ad5e-df143591092f","Type":"ContainerDied","Data":"7f8799db03d44b9bc3afe805b7e6af24b1d2e2fc103b5b76d9aaef5455993dee"} Jan 29 16:37:59 crc kubenswrapper[4886]: I0129 16:37:59.861662 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kfgdj" event={"ID":"b4408259-440c-4434-ad5e-df143591092f","Type":"ContainerDied","Data":"65cd5bcca908b0b496f52d5ad6cc1abf1980809b3bca9141ebb7782171f5ef55"} Jan 29 16:37:59 crc kubenswrapper[4886]: I0129 16:37:59.861682 4886 scope.go:117] "RemoveContainer" containerID="7f8799db03d44b9bc3afe805b7e6af24b1d2e2fc103b5b76d9aaef5455993dee" Jan 29 16:37:59 crc kubenswrapper[4886]: I0129 16:37:59.861680 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kfgdj" Jan 29 16:37:59 crc kubenswrapper[4886]: I0129 16:37:59.903777 4886 scope.go:117] "RemoveContainer" containerID="4a8106271fae12af1142ac5ef147ed049a9212bff974e10331949896bfe2f22a" Jan 29 16:37:59 crc kubenswrapper[4886]: I0129 16:37:59.912433 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4408259-440c-4434-ad5e-df143591092f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4408259-440c-4434-ad5e-df143591092f" (UID: "b4408259-440c-4434-ad5e-df143591092f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:37:59 crc kubenswrapper[4886]: I0129 16:37:59.919395 4886 scope.go:117] "RemoveContainer" containerID="d70d1fce763398b1fbc89d3ba02890b194f9bd437f727f0609064eb7a07084e7" Jan 29 16:37:59 crc kubenswrapper[4886]: I0129 16:37:59.947632 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt878\" (UniqueName: \"kubernetes.io/projected/b4408259-440c-4434-ad5e-df143591092f-kube-api-access-qt878\") on node \"crc\" DevicePath \"\"" Jan 29 16:37:59 crc kubenswrapper[4886]: I0129 16:37:59.947862 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4408259-440c-4434-ad5e-df143591092f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:37:59 crc kubenswrapper[4886]: I0129 16:37:59.947952 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4408259-440c-4434-ad5e-df143591092f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:37:59 crc kubenswrapper[4886]: I0129 16:37:59.958201 4886 scope.go:117] "RemoveContainer" containerID="7f8799db03d44b9bc3afe805b7e6af24b1d2e2fc103b5b76d9aaef5455993dee" Jan 29 16:37:59 crc kubenswrapper[4886]: E0129 16:37:59.959016 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f8799db03d44b9bc3afe805b7e6af24b1d2e2fc103b5b76d9aaef5455993dee\": container with ID starting with 7f8799db03d44b9bc3afe805b7e6af24b1d2e2fc103b5b76d9aaef5455993dee not found: ID does not exist" containerID="7f8799db03d44b9bc3afe805b7e6af24b1d2e2fc103b5b76d9aaef5455993dee" Jan 29 16:37:59 crc kubenswrapper[4886]: I0129 16:37:59.959065 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f8799db03d44b9bc3afe805b7e6af24b1d2e2fc103b5b76d9aaef5455993dee"} err="failed to get container status \"7f8799db03d44b9bc3afe805b7e6af24b1d2e2fc103b5b76d9aaef5455993dee\": rpc error: code = NotFound desc = could not find container \"7f8799db03d44b9bc3afe805b7e6af24b1d2e2fc103b5b76d9aaef5455993dee\": container with ID starting with 7f8799db03d44b9bc3afe805b7e6af24b1d2e2fc103b5b76d9aaef5455993dee not found: ID does not exist" Jan 29 16:37:59 crc kubenswrapper[4886]: I0129 16:37:59.959096 4886 scope.go:117] "RemoveContainer" containerID="4a8106271fae12af1142ac5ef147ed049a9212bff974e10331949896bfe2f22a" Jan 29 16:37:59 crc kubenswrapper[4886]: E0129 16:37:59.959565 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a8106271fae12af1142ac5ef147ed049a9212bff974e10331949896bfe2f22a\": container with ID starting with 4a8106271fae12af1142ac5ef147ed049a9212bff974e10331949896bfe2f22a not found: ID does not exist" containerID="4a8106271fae12af1142ac5ef147ed049a9212bff974e10331949896bfe2f22a" Jan 29 16:37:59 crc kubenswrapper[4886]: I0129 16:37:59.959649 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a8106271fae12af1142ac5ef147ed049a9212bff974e10331949896bfe2f22a"} err="failed to get container status \"4a8106271fae12af1142ac5ef147ed049a9212bff974e10331949896bfe2f22a\": rpc error: code = NotFound desc = could not find container \"4a8106271fae12af1142ac5ef147ed049a9212bff974e10331949896bfe2f22a\": container with ID starting with 4a8106271fae12af1142ac5ef147ed049a9212bff974e10331949896bfe2f22a not found: ID does not exist" Jan 29 16:37:59 crc 
kubenswrapper[4886]: I0129 16:37:59.959737 4886 scope.go:117] "RemoveContainer" containerID="d70d1fce763398b1fbc89d3ba02890b194f9bd437f727f0609064eb7a07084e7" Jan 29 16:37:59 crc kubenswrapper[4886]: E0129 16:37:59.960373 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d70d1fce763398b1fbc89d3ba02890b194f9bd437f727f0609064eb7a07084e7\": container with ID starting with d70d1fce763398b1fbc89d3ba02890b194f9bd437f727f0609064eb7a07084e7 not found: ID does not exist" containerID="d70d1fce763398b1fbc89d3ba02890b194f9bd437f727f0609064eb7a07084e7" Jan 29 16:37:59 crc kubenswrapper[4886]: I0129 16:37:59.960404 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d70d1fce763398b1fbc89d3ba02890b194f9bd437f727f0609064eb7a07084e7"} err="failed to get container status \"d70d1fce763398b1fbc89d3ba02890b194f9bd437f727f0609064eb7a07084e7\": rpc error: code = NotFound desc = could not find container \"d70d1fce763398b1fbc89d3ba02890b194f9bd437f727f0609064eb7a07084e7\": container with ID starting with d70d1fce763398b1fbc89d3ba02890b194f9bd437f727f0609064eb7a07084e7 not found: ID does not exist" Jan 29 16:38:00 crc kubenswrapper[4886]: I0129 16:38:00.206990 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kfgdj"] Jan 29 16:38:00 crc kubenswrapper[4886]: I0129 16:38:00.211707 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kfgdj"] Jan 29 16:38:00 crc kubenswrapper[4886]: I0129 16:38:00.643540 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4408259-440c-4434-ad5e-df143591092f" path="/var/lib/kubelet/pods/b4408259-440c-4434-ad5e-df143591092f/volumes" Jan 29 16:38:03 crc kubenswrapper[4886]: E0129 16:38:03.617057 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:38:05 crc kubenswrapper[4886]: I0129 16:38:05.291346 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pzrc9"] Jan 29 16:38:05 crc kubenswrapper[4886]: E0129 16:38:05.291656 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4408259-440c-4434-ad5e-df143591092f" containerName="registry-server" Jan 29 16:38:05 crc kubenswrapper[4886]: I0129 16:38:05.291674 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4408259-440c-4434-ad5e-df143591092f" containerName="registry-server" Jan 29 16:38:05 crc kubenswrapper[4886]: E0129 16:38:05.291714 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4408259-440c-4434-ad5e-df143591092f" containerName="extract-utilities" Jan 29 16:38:05 crc kubenswrapper[4886]: I0129 16:38:05.291725 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4408259-440c-4434-ad5e-df143591092f" containerName="extract-utilities" Jan 29 16:38:05 crc kubenswrapper[4886]: E0129 16:38:05.291740 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4408259-440c-4434-ad5e-df143591092f" containerName="extract-content" Jan 29 16:38:05 crc kubenswrapper[4886]: I0129 16:38:05.291750 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4408259-440c-4434-ad5e-df143591092f" 
containerName="extract-content" Jan 29 16:38:05 crc kubenswrapper[4886]: I0129 16:38:05.291942 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4408259-440c-4434-ad5e-df143591092f" containerName="registry-server" Jan 29 16:38:05 crc kubenswrapper[4886]: I0129 16:38:05.293204 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pzrc9" Jan 29 16:38:05 crc kubenswrapper[4886]: I0129 16:38:05.308220 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pzrc9"] Jan 29 16:38:05 crc kubenswrapper[4886]: I0129 16:38:05.430877 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfsqr\" (UniqueName: \"kubernetes.io/projected/92af1116-2260-4c2f-a3b2-b3045d51065e-kube-api-access-xfsqr\") pod \"community-operators-pzrc9\" (UID: \"92af1116-2260-4c2f-a3b2-b3045d51065e\") " pod="openshift-marketplace/community-operators-pzrc9" Jan 29 16:38:05 crc kubenswrapper[4886]: I0129 16:38:05.430939 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92af1116-2260-4c2f-a3b2-b3045d51065e-utilities\") pod \"community-operators-pzrc9\" (UID: \"92af1116-2260-4c2f-a3b2-b3045d51065e\") " pod="openshift-marketplace/community-operators-pzrc9" Jan 29 16:38:05 crc kubenswrapper[4886]: I0129 16:38:05.430999 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92af1116-2260-4c2f-a3b2-b3045d51065e-catalog-content\") pod \"community-operators-pzrc9\" (UID: \"92af1116-2260-4c2f-a3b2-b3045d51065e\") " pod="openshift-marketplace/community-operators-pzrc9" Jan 29 16:38:05 crc kubenswrapper[4886]: I0129 16:38:05.532972 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92af1116-2260-4c2f-a3b2-b3045d51065e-catalog-content\") pod \"community-operators-pzrc9\" (UID: \"92af1116-2260-4c2f-a3b2-b3045d51065e\") " pod="openshift-marketplace/community-operators-pzrc9" Jan 29 16:38:05 crc kubenswrapper[4886]: I0129 16:38:05.533425 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfsqr\" (UniqueName: \"kubernetes.io/projected/92af1116-2260-4c2f-a3b2-b3045d51065e-kube-api-access-xfsqr\") pod \"community-operators-pzrc9\" (UID: \"92af1116-2260-4c2f-a3b2-b3045d51065e\") " pod="openshift-marketplace/community-operators-pzrc9" Jan 29 16:38:05 crc kubenswrapper[4886]: I0129 16:38:05.533494 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92af1116-2260-4c2f-a3b2-b3045d51065e-utilities\") pod \"community-operators-pzrc9\" (UID: \"92af1116-2260-4c2f-a3b2-b3045d51065e\") " pod="openshift-marketplace/community-operators-pzrc9" Jan 29 16:38:05 crc kubenswrapper[4886]: I0129 16:38:05.533664 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92af1116-2260-4c2f-a3b2-b3045d51065e-catalog-content\") pod \"community-operators-pzrc9\" (UID: \"92af1116-2260-4c2f-a3b2-b3045d51065e\") " pod="openshift-marketplace/community-operators-pzrc9" Jan 29 16:38:05 crc kubenswrapper[4886]: I0129 16:38:05.534011 4886 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92af1116-2260-4c2f-a3b2-b3045d51065e-utilities\") pod \"community-operators-pzrc9\" (UID: \"92af1116-2260-4c2f-a3b2-b3045d51065e\") " pod="openshift-marketplace/community-operators-pzrc9" Jan 29 16:38:05 crc kubenswrapper[4886]: I0129 16:38:05.566662 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfsqr\" (UniqueName: \"kubernetes.io/projected/92af1116-2260-4c2f-a3b2-b3045d51065e-kube-api-access-xfsqr\") pod \"community-operators-pzrc9\" (UID: \"92af1116-2260-4c2f-a3b2-b3045d51065e\") " pod="openshift-marketplace/community-operators-pzrc9" Jan 29 16:38:05 crc kubenswrapper[4886]: I0129 16:38:05.613312 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pzrc9" Jan 29 16:38:05 crc kubenswrapper[4886]: I0129 16:38:05.801596 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pzrc9"] Jan 29 16:38:05 crc kubenswrapper[4886]: I0129 16:38:05.904575 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzrc9" event={"ID":"92af1116-2260-4c2f-a3b2-b3045d51065e","Type":"ContainerStarted","Data":"a5aee8ffafba103be40b015f41604638130a97df1c4c358df1fd18e7cd77f933"} Jan 29 16:38:06 crc kubenswrapper[4886]: I0129 16:38:06.913908 4886 generic.go:334] "Generic (PLEG): container finished" podID="92af1116-2260-4c2f-a3b2-b3045d51065e" containerID="03dc4d084e92a1a4c1b13b14dd33a72a0ff570323d3fe9be1d52ad1281c0cc68" exitCode=0 Jan 29 16:38:06 crc kubenswrapper[4886]: I0129 16:38:06.913972 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzrc9" event={"ID":"92af1116-2260-4c2f-a3b2-b3045d51065e","Type":"ContainerDied","Data":"03dc4d084e92a1a4c1b13b14dd33a72a0ff570323d3fe9be1d52ad1281c0cc68"} Jan 29 16:38:07 crc kubenswrapper[4886]: I0129 16:38:07.923165 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzrc9" event={"ID":"92af1116-2260-4c2f-a3b2-b3045d51065e","Type":"ContainerStarted","Data":"e16b246f25ed8a9774fabcbf44fd890ca7a79123170eeee68579d3e84408cbde"} Jan 29 16:38:08 crc kubenswrapper[4886]: I0129 16:38:08.931270 4886 generic.go:334] "Generic (PLEG): container finished" podID="92af1116-2260-4c2f-a3b2-b3045d51065e" containerID="e16b246f25ed8a9774fabcbf44fd890ca7a79123170eeee68579d3e84408cbde" exitCode=0 Jan 29 16:38:08 crc kubenswrapper[4886]: I0129 16:38:08.931487 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzrc9" event={"ID":"92af1116-2260-4c2f-a3b2-b3045d51065e","Type":"ContainerDied","Data":"e16b246f25ed8a9774fabcbf44fd890ca7a79123170eeee68579d3e84408cbde"} Jan 29 16:38:09 crc kubenswrapper[4886]: I0129 16:38:09.940636 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzrc9" event={"ID":"92af1116-2260-4c2f-a3b2-b3045d51065e","Type":"ContainerStarted","Data":"f830e2985c2ce12ffdec81f706a0ce0df2ec836503f1182ff628ddf87c45db60"} Jan 29 16:38:13 crc kubenswrapper[4886]: E0129 16:38:13.616845 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:38:14 crc 
kubenswrapper[4886]: I0129 16:38:14.682869 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pzrc9" podStartSLOduration=7.255194957 podStartE2EDuration="9.68285469s" podCreationTimestamp="2026-01-29 16:38:05 +0000 UTC" firstStartedPulling="2026-01-29 16:38:06.918105346 +0000 UTC m=+969.826824628" lastFinishedPulling="2026-01-29 16:38:09.345765089 +0000 UTC m=+972.254484361" observedRunningTime="2026-01-29 16:38:09.969938362 +0000 UTC m=+972.878657654" watchObservedRunningTime="2026-01-29 16:38:14.68285469 +0000 UTC m=+977.591573962" Jan 29 16:38:14 crc kubenswrapper[4886]: I0129 16:38:14.684766 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vgrfs"] Jan 29 16:38:14 crc kubenswrapper[4886]: I0129 16:38:14.685890 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vgrfs" Jan 29 16:38:14 crc kubenswrapper[4886]: I0129 16:38:14.710422 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vgrfs"] Jan 29 16:38:14 crc kubenswrapper[4886]: I0129 16:38:14.795395 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bc856aa-a27b-4856-a888-7104df47cf30-utilities\") pod \"redhat-operators-vgrfs\" (UID: \"5bc856aa-a27b-4856-a888-7104df47cf30\") " pod="openshift-marketplace/redhat-operators-vgrfs" Jan 29 16:38:14 crc kubenswrapper[4886]: I0129 16:38:14.795764 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxpvl\" (UniqueName: \"kubernetes.io/projected/5bc856aa-a27b-4856-a888-7104df47cf30-kube-api-access-hxpvl\") pod \"redhat-operators-vgrfs\" (UID: \"5bc856aa-a27b-4856-a888-7104df47cf30\") " pod="openshift-marketplace/redhat-operators-vgrfs" Jan 29 16:38:14 crc kubenswrapper[4886]: I0129 16:38:14.795941 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bc856aa-a27b-4856-a888-7104df47cf30-catalog-content\") pod \"redhat-operators-vgrfs\" (UID: \"5bc856aa-a27b-4856-a888-7104df47cf30\") " pod="openshift-marketplace/redhat-operators-vgrfs" Jan 29 16:38:14 crc kubenswrapper[4886]: I0129 16:38:14.897531 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bc856aa-a27b-4856-a888-7104df47cf30-utilities\") pod \"redhat-operators-vgrfs\" (UID: \"5bc856aa-a27b-4856-a888-7104df47cf30\") " pod="openshift-marketplace/redhat-operators-vgrfs" Jan 29 16:38:14 crc kubenswrapper[4886]: I0129 16:38:14.897612 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxpvl\" (UniqueName: \"kubernetes.io/projected/5bc856aa-a27b-4856-a888-7104df47cf30-kube-api-access-hxpvl\") pod \"redhat-operators-vgrfs\" (UID: \"5bc856aa-a27b-4856-a888-7104df47cf30\") " pod="openshift-marketplace/redhat-operators-vgrfs" Jan 29 16:38:14 crc kubenswrapper[4886]: I0129 16:38:14.897650 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bc856aa-a27b-4856-a888-7104df47cf30-catalog-content\") pod \"redhat-operators-vgrfs\" (UID: \"5bc856aa-a27b-4856-a888-7104df47cf30\") " pod="openshift-marketplace/redhat-operators-vgrfs" Jan 29 16:38:14 
crc kubenswrapper[4886]: I0129 16:38:14.898105 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bc856aa-a27b-4856-a888-7104df47cf30-catalog-content\") pod \"redhat-operators-vgrfs\" (UID: \"5bc856aa-a27b-4856-a888-7104df47cf30\") " pod="openshift-marketplace/redhat-operators-vgrfs" Jan 29 16:38:14 crc kubenswrapper[4886]: I0129 16:38:14.898403 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bc856aa-a27b-4856-a888-7104df47cf30-utilities\") pod \"redhat-operators-vgrfs\" (UID: \"5bc856aa-a27b-4856-a888-7104df47cf30\") " pod="openshift-marketplace/redhat-operators-vgrfs" Jan 29 16:38:14 crc kubenswrapper[4886]: I0129 16:38:14.916716 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxpvl\" (UniqueName: \"kubernetes.io/projected/5bc856aa-a27b-4856-a888-7104df47cf30-kube-api-access-hxpvl\") pod \"redhat-operators-vgrfs\" (UID: \"5bc856aa-a27b-4856-a888-7104df47cf30\") " pod="openshift-marketplace/redhat-operators-vgrfs" Jan 29 16:38:15 crc kubenswrapper[4886]: I0129 16:38:15.012073 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vgrfs" Jan 29 16:38:15 crc kubenswrapper[4886]: I0129 16:38:15.233766 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vgrfs"] Jan 29 16:38:15 crc kubenswrapper[4886]: W0129 16:38:15.246239 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bc856aa_a27b_4856_a888_7104df47cf30.slice/crio-642d1d75fa3c4399292c0700dad3ed1dd140aa358c860f9e89f06502f40c5255 WatchSource:0}: Error finding container 642d1d75fa3c4399292c0700dad3ed1dd140aa358c860f9e89f06502f40c5255: Status 404 returned error can't find the container with id 642d1d75fa3c4399292c0700dad3ed1dd140aa358c860f9e89f06502f40c5255 Jan 29 16:38:15 crc kubenswrapper[4886]: I0129 16:38:15.613911 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pzrc9" Jan 29 16:38:15 crc kubenswrapper[4886]: I0129 16:38:15.613971 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pzrc9" Jan 29 16:38:15 crc kubenswrapper[4886]: I0129 16:38:15.659812 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pzrc9" Jan 29 16:38:15 crc kubenswrapper[4886]: I0129 16:38:15.974301 4886 generic.go:334] "Generic (PLEG): container finished" podID="5bc856aa-a27b-4856-a888-7104df47cf30" containerID="f321a5b5711c742d2c9335f082716b7a364f071a6e0ed342cb01bfcdaf92884a" exitCode=0 Jan 29 16:38:15 crc kubenswrapper[4886]: I0129 16:38:15.974354 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vgrfs" event={"ID":"5bc856aa-a27b-4856-a888-7104df47cf30","Type":"ContainerDied","Data":"f321a5b5711c742d2c9335f082716b7a364f071a6e0ed342cb01bfcdaf92884a"} Jan 29 16:38:15 crc kubenswrapper[4886]: I0129 16:38:15.974397 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vgrfs" event={"ID":"5bc856aa-a27b-4856-a888-7104df47cf30","Type":"ContainerStarted","Data":"642d1d75fa3c4399292c0700dad3ed1dd140aa358c860f9e89f06502f40c5255"} Jan 29 16:38:16 crc kubenswrapper[4886]: I0129 
16:38:16.018585 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pzrc9" Jan 29 16:38:16 crc kubenswrapper[4886]: E0129 16:38:16.617189 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:38:17 crc kubenswrapper[4886]: I0129 16:38:17.988824 4886 generic.go:334] "Generic (PLEG): container finished" podID="5bc856aa-a27b-4856-a888-7104df47cf30" containerID="8d0b96bb16d9b428b30612fd0b938c1d4924a8676d599e2c175e0ae963ed72f3" exitCode=0 Jan 29 16:38:17 crc kubenswrapper[4886]: I0129 16:38:17.988954 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vgrfs" event={"ID":"5bc856aa-a27b-4856-a888-7104df47cf30","Type":"ContainerDied","Data":"8d0b96bb16d9b428b30612fd0b938c1d4924a8676d599e2c175e0ae963ed72f3"} Jan 29 16:38:18 crc kubenswrapper[4886]: I0129 16:38:18.998788 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vgrfs" event={"ID":"5bc856aa-a27b-4856-a888-7104df47cf30","Type":"ContainerStarted","Data":"b953f483b87cc6eb1e353b30cfe440976c5d7b9acaa026806d5e31600d81f396"} Jan 29 16:38:19 crc kubenswrapper[4886]: I0129 16:38:19.027948 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vgrfs" podStartSLOduration=2.537051581 podStartE2EDuration="5.027920308s" podCreationTimestamp="2026-01-29 16:38:14 +0000 UTC" firstStartedPulling="2026-01-29 16:38:15.975474954 +0000 UTC m=+978.884194226" lastFinishedPulling="2026-01-29 16:38:18.466343681 +0000 UTC m=+981.375062953" observedRunningTime="2026-01-29 16:38:19.027396003 +0000 UTC m=+981.936115325" watchObservedRunningTime="2026-01-29 16:38:19.027920308 +0000 UTC m=+981.936639610" Jan 29 16:38:19 crc kubenswrapper[4886]: I0129 16:38:19.277954 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pzrc9"] Jan 29 16:38:19 crc kubenswrapper[4886]: I0129 16:38:19.278226 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pzrc9" podUID="92af1116-2260-4c2f-a3b2-b3045d51065e" containerName="registry-server" containerID="cri-o://f830e2985c2ce12ffdec81f706a0ce0df2ec836503f1182ff628ddf87c45db60" gracePeriod=2 Jan 29 16:38:20 crc kubenswrapper[4886]: I0129 16:38:20.007727 4886 generic.go:334] "Generic (PLEG): container finished" podID="92af1116-2260-4c2f-a3b2-b3045d51065e" containerID="f830e2985c2ce12ffdec81f706a0ce0df2ec836503f1182ff628ddf87c45db60" exitCode=0 Jan 29 16:38:20 crc kubenswrapper[4886]: I0129 16:38:20.008443 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzrc9" event={"ID":"92af1116-2260-4c2f-a3b2-b3045d51065e","Type":"ContainerDied","Data":"f830e2985c2ce12ffdec81f706a0ce0df2ec836503f1182ff628ddf87c45db60"} Jan 29 16:38:20 crc kubenswrapper[4886]: I0129 16:38:20.292110 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pzrc9" Jan 29 16:38:20 crc kubenswrapper[4886]: I0129 16:38:20.374160 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92af1116-2260-4c2f-a3b2-b3045d51065e-catalog-content\") pod \"92af1116-2260-4c2f-a3b2-b3045d51065e\" (UID: \"92af1116-2260-4c2f-a3b2-b3045d51065e\") " Jan 29 16:38:20 crc kubenswrapper[4886]: I0129 16:38:20.433736 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92af1116-2260-4c2f-a3b2-b3045d51065e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92af1116-2260-4c2f-a3b2-b3045d51065e" (UID: "92af1116-2260-4c2f-a3b2-b3045d51065e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:38:20 crc kubenswrapper[4886]: I0129 16:38:20.475128 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfsqr\" (UniqueName: \"kubernetes.io/projected/92af1116-2260-4c2f-a3b2-b3045d51065e-kube-api-access-xfsqr\") pod \"92af1116-2260-4c2f-a3b2-b3045d51065e\" (UID: \"92af1116-2260-4c2f-a3b2-b3045d51065e\") " Jan 29 16:38:20 crc kubenswrapper[4886]: I0129 16:38:20.475217 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92af1116-2260-4c2f-a3b2-b3045d51065e-utilities\") pod \"92af1116-2260-4c2f-a3b2-b3045d51065e\" (UID: \"92af1116-2260-4c2f-a3b2-b3045d51065e\") " Jan 29 16:38:20 crc kubenswrapper[4886]: I0129 16:38:20.475717 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92af1116-2260-4c2f-a3b2-b3045d51065e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:38:20 crc kubenswrapper[4886]: I0129 16:38:20.475898 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92af1116-2260-4c2f-a3b2-b3045d51065e-utilities" (OuterVolumeSpecName: "utilities") pod "92af1116-2260-4c2f-a3b2-b3045d51065e" (UID: "92af1116-2260-4c2f-a3b2-b3045d51065e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:38:20 crc kubenswrapper[4886]: I0129 16:38:20.479806 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92af1116-2260-4c2f-a3b2-b3045d51065e-kube-api-access-xfsqr" (OuterVolumeSpecName: "kube-api-access-xfsqr") pod "92af1116-2260-4c2f-a3b2-b3045d51065e" (UID: "92af1116-2260-4c2f-a3b2-b3045d51065e"). InnerVolumeSpecName "kube-api-access-xfsqr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:38:20 crc kubenswrapper[4886]: I0129 16:38:20.577083 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfsqr\" (UniqueName: \"kubernetes.io/projected/92af1116-2260-4c2f-a3b2-b3045d51065e-kube-api-access-xfsqr\") on node \"crc\" DevicePath \"\"" Jan 29 16:38:20 crc kubenswrapper[4886]: I0129 16:38:20.577130 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92af1116-2260-4c2f-a3b2-b3045d51065e-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:38:21 crc kubenswrapper[4886]: I0129 16:38:21.017055 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzrc9" event={"ID":"92af1116-2260-4c2f-a3b2-b3045d51065e","Type":"ContainerDied","Data":"a5aee8ffafba103be40b015f41604638130a97df1c4c358df1fd18e7cd77f933"} Jan 29 16:38:21 crc kubenswrapper[4886]: I0129 16:38:21.017114 4886 scope.go:117] "RemoveContainer" containerID="f830e2985c2ce12ffdec81f706a0ce0df2ec836503f1182ff628ddf87c45db60" Jan 29 16:38:21 crc kubenswrapper[4886]: I0129 16:38:21.017116 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pzrc9" Jan 29 16:38:21 crc kubenswrapper[4886]: I0129 16:38:21.040267 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pzrc9"] Jan 29 16:38:21 crc kubenswrapper[4886]: I0129 16:38:21.044366 4886 scope.go:117] "RemoveContainer" containerID="e16b246f25ed8a9774fabcbf44fd890ca7a79123170eeee68579d3e84408cbde" Jan 29 16:38:21 crc kubenswrapper[4886]: I0129 16:38:21.045792 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pzrc9"] Jan 29 16:38:21 crc kubenswrapper[4886]: I0129 16:38:21.067849 4886 scope.go:117] "RemoveContainer" containerID="03dc4d084e92a1a4c1b13b14dd33a72a0ff570323d3fe9be1d52ad1281c0cc68" Jan 29 16:38:22 crc kubenswrapper[4886]: I0129 16:38:22.621949 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92af1116-2260-4c2f-a3b2-b3045d51065e" path="/var/lib/kubelet/pods/92af1116-2260-4c2f-a3b2-b3045d51065e/volumes" Jan 29 16:38:25 crc kubenswrapper[4886]: I0129 16:38:25.012914 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vgrfs" Jan 29 16:38:25 crc kubenswrapper[4886]: I0129 16:38:25.014502 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vgrfs" Jan 29 16:38:25 crc kubenswrapper[4886]: I0129 16:38:25.051480 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vgrfs" Jan 29 16:38:25 crc kubenswrapper[4886]: I0129 16:38:25.105141 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vgrfs" Jan 29 16:38:26 crc kubenswrapper[4886]: E0129 16:38:26.617715 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:38:27 crc kubenswrapper[4886]: I0129 16:38:27.676266 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vgrfs"] 
Jan 29 16:38:27 crc kubenswrapper[4886]: I0129 16:38:27.676739 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vgrfs" podUID="5bc856aa-a27b-4856-a888-7104df47cf30" containerName="registry-server" containerID="cri-o://b953f483b87cc6eb1e353b30cfe440976c5d7b9acaa026806d5e31600d81f396" gracePeriod=2 Jan 29 16:38:28 crc kubenswrapper[4886]: I0129 16:38:28.032886 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vgrfs" Jan 29 16:38:28 crc kubenswrapper[4886]: I0129 16:38:28.069050 4886 generic.go:334] "Generic (PLEG): container finished" podID="5bc856aa-a27b-4856-a888-7104df47cf30" containerID="b953f483b87cc6eb1e353b30cfe440976c5d7b9acaa026806d5e31600d81f396" exitCode=0 Jan 29 16:38:28 crc kubenswrapper[4886]: I0129 16:38:28.069106 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vgrfs" event={"ID":"5bc856aa-a27b-4856-a888-7104df47cf30","Type":"ContainerDied","Data":"b953f483b87cc6eb1e353b30cfe440976c5d7b9acaa026806d5e31600d81f396"} Jan 29 16:38:28 crc kubenswrapper[4886]: I0129 16:38:28.069139 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vgrfs" event={"ID":"5bc856aa-a27b-4856-a888-7104df47cf30","Type":"ContainerDied","Data":"642d1d75fa3c4399292c0700dad3ed1dd140aa358c860f9e89f06502f40c5255"} Jan 29 16:38:28 crc kubenswrapper[4886]: I0129 16:38:28.069160 4886 scope.go:117] "RemoveContainer" containerID="b953f483b87cc6eb1e353b30cfe440976c5d7b9acaa026806d5e31600d81f396" Jan 29 16:38:28 crc kubenswrapper[4886]: I0129 16:38:28.069312 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vgrfs" Jan 29 16:38:28 crc kubenswrapper[4886]: I0129 16:38:28.087478 4886 scope.go:117] "RemoveContainer" containerID="8d0b96bb16d9b428b30612fd0b938c1d4924a8676d599e2c175e0ae963ed72f3" Jan 29 16:38:28 crc kubenswrapper[4886]: I0129 16:38:28.103017 4886 scope.go:117] "RemoveContainer" containerID="f321a5b5711c742d2c9335f082716b7a364f071a6e0ed342cb01bfcdaf92884a" Jan 29 16:38:28 crc kubenswrapper[4886]: I0129 16:38:28.107280 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxpvl\" (UniqueName: \"kubernetes.io/projected/5bc856aa-a27b-4856-a888-7104df47cf30-kube-api-access-hxpvl\") pod \"5bc856aa-a27b-4856-a888-7104df47cf30\" (UID: \"5bc856aa-a27b-4856-a888-7104df47cf30\") " Jan 29 16:38:28 crc kubenswrapper[4886]: I0129 16:38:28.107441 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bc856aa-a27b-4856-a888-7104df47cf30-utilities\") pod \"5bc856aa-a27b-4856-a888-7104df47cf30\" (UID: \"5bc856aa-a27b-4856-a888-7104df47cf30\") " Jan 29 16:38:28 crc kubenswrapper[4886]: I0129 16:38:28.107481 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bc856aa-a27b-4856-a888-7104df47cf30-catalog-content\") pod \"5bc856aa-a27b-4856-a888-7104df47cf30\" (UID: \"5bc856aa-a27b-4856-a888-7104df47cf30\") " Jan 29 16:38:28 crc kubenswrapper[4886]: I0129 16:38:28.112278 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bc856aa-a27b-4856-a888-7104df47cf30-kube-api-access-hxpvl" (OuterVolumeSpecName: "kube-api-access-hxpvl") pod 
"5bc856aa-a27b-4856-a888-7104df47cf30" (UID: "5bc856aa-a27b-4856-a888-7104df47cf30"). InnerVolumeSpecName "kube-api-access-hxpvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:38:28 crc kubenswrapper[4886]: I0129 16:38:28.113165 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bc856aa-a27b-4856-a888-7104df47cf30-utilities" (OuterVolumeSpecName: "utilities") pod "5bc856aa-a27b-4856-a888-7104df47cf30" (UID: "5bc856aa-a27b-4856-a888-7104df47cf30"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:38:28 crc kubenswrapper[4886]: I0129 16:38:28.150021 4886 scope.go:117] "RemoveContainer" containerID="b953f483b87cc6eb1e353b30cfe440976c5d7b9acaa026806d5e31600d81f396" Jan 29 16:38:28 crc kubenswrapper[4886]: E0129 16:38:28.150559 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b953f483b87cc6eb1e353b30cfe440976c5d7b9acaa026806d5e31600d81f396\": container with ID starting with b953f483b87cc6eb1e353b30cfe440976c5d7b9acaa026806d5e31600d81f396 not found: ID does not exist" containerID="b953f483b87cc6eb1e353b30cfe440976c5d7b9acaa026806d5e31600d81f396" Jan 29 16:38:28 crc kubenswrapper[4886]: I0129 16:38:28.150656 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b953f483b87cc6eb1e353b30cfe440976c5d7b9acaa026806d5e31600d81f396"} err="failed to get container status \"b953f483b87cc6eb1e353b30cfe440976c5d7b9acaa026806d5e31600d81f396\": rpc error: code = NotFound desc = could not find container \"b953f483b87cc6eb1e353b30cfe440976c5d7b9acaa026806d5e31600d81f396\": container with ID starting with b953f483b87cc6eb1e353b30cfe440976c5d7b9acaa026806d5e31600d81f396 not found: ID does not exist" Jan 29 16:38:28 crc kubenswrapper[4886]: I0129 16:38:28.150694 4886 scope.go:117] "RemoveContainer" containerID="8d0b96bb16d9b428b30612fd0b938c1d4924a8676d599e2c175e0ae963ed72f3" Jan 29 16:38:28 crc kubenswrapper[4886]: E0129 16:38:28.151010 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d0b96bb16d9b428b30612fd0b938c1d4924a8676d599e2c175e0ae963ed72f3\": container with ID starting with 8d0b96bb16d9b428b30612fd0b938c1d4924a8676d599e2c175e0ae963ed72f3 not found: ID does not exist" containerID="8d0b96bb16d9b428b30612fd0b938c1d4924a8676d599e2c175e0ae963ed72f3" Jan 29 16:38:28 crc kubenswrapper[4886]: I0129 16:38:28.151091 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d0b96bb16d9b428b30612fd0b938c1d4924a8676d599e2c175e0ae963ed72f3"} err="failed to get container status \"8d0b96bb16d9b428b30612fd0b938c1d4924a8676d599e2c175e0ae963ed72f3\": rpc error: code = NotFound desc = could not find container \"8d0b96bb16d9b428b30612fd0b938c1d4924a8676d599e2c175e0ae963ed72f3\": container with ID starting with 8d0b96bb16d9b428b30612fd0b938c1d4924a8676d599e2c175e0ae963ed72f3 not found: ID does not exist" Jan 29 16:38:28 crc kubenswrapper[4886]: I0129 16:38:28.151117 4886 scope.go:117] "RemoveContainer" containerID="f321a5b5711c742d2c9335f082716b7a364f071a6e0ed342cb01bfcdaf92884a" Jan 29 16:38:28 crc kubenswrapper[4886]: E0129 16:38:28.151411 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f321a5b5711c742d2c9335f082716b7a364f071a6e0ed342cb01bfcdaf92884a\": container with ID starting 
with f321a5b5711c742d2c9335f082716b7a364f071a6e0ed342cb01bfcdaf92884a not found: ID does not exist" containerID="f321a5b5711c742d2c9335f082716b7a364f071a6e0ed342cb01bfcdaf92884a" Jan 29 16:38:28 crc kubenswrapper[4886]: I0129 16:38:28.151443 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f321a5b5711c742d2c9335f082716b7a364f071a6e0ed342cb01bfcdaf92884a"} err="failed to get container status \"f321a5b5711c742d2c9335f082716b7a364f071a6e0ed342cb01bfcdaf92884a\": rpc error: code = NotFound desc = could not find container \"f321a5b5711c742d2c9335f082716b7a364f071a6e0ed342cb01bfcdaf92884a\": container with ID starting with f321a5b5711c742d2c9335f082716b7a364f071a6e0ed342cb01bfcdaf92884a not found: ID does not exist" Jan 29 16:38:28 crc kubenswrapper[4886]: I0129 16:38:28.208858 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bc856aa-a27b-4856-a888-7104df47cf30-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:38:28 crc kubenswrapper[4886]: I0129 16:38:28.208898 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxpvl\" (UniqueName: \"kubernetes.io/projected/5bc856aa-a27b-4856-a888-7104df47cf30-kube-api-access-hxpvl\") on node \"crc\" DevicePath \"\"" Jan 29 16:38:28 crc kubenswrapper[4886]: I0129 16:38:28.254402 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bc856aa-a27b-4856-a888-7104df47cf30-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5bc856aa-a27b-4856-a888-7104df47cf30" (UID: "5bc856aa-a27b-4856-a888-7104df47cf30"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:38:28 crc kubenswrapper[4886]: I0129 16:38:28.310212 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bc856aa-a27b-4856-a888-7104df47cf30-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:38:28 crc kubenswrapper[4886]: I0129 16:38:28.401668 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vgrfs"] Jan 29 16:38:28 crc kubenswrapper[4886]: I0129 16:38:28.411877 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vgrfs"] Jan 29 16:38:28 crc kubenswrapper[4886]: E0129 16:38:28.618978 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" Jan 29 16:38:28 crc kubenswrapper[4886]: I0129 16:38:28.627220 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bc856aa-a27b-4856-a888-7104df47cf30" path="/var/lib/kubelet/pods/5bc856aa-a27b-4856-a888-7104df47cf30/volumes" Jan 29 16:38:29 crc kubenswrapper[4886]: I0129 16:38:29.662828 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:38:29 crc kubenswrapper[4886]: I0129 16:38:29.662929 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" 
podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:38:29 crc kubenswrapper[4886]: I0129 16:38:29.662978 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" Jan 29 16:38:29 crc kubenswrapper[4886]: I0129 16:38:29.663618 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"50ba5c9bdbddc145f7d20c044a7cd326eb16e00aa141bfc3e8c4f610ef31ae97"} pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:38:29 crc kubenswrapper[4886]: I0129 16:38:29.663665 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" containerID="cri-o://50ba5c9bdbddc145f7d20c044a7cd326eb16e00aa141bfc3e8c4f610ef31ae97" gracePeriod=600 Jan 29 16:38:30 crc kubenswrapper[4886]: I0129 16:38:30.088120 4886 generic.go:334] "Generic (PLEG): container finished" podID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerID="50ba5c9bdbddc145f7d20c044a7cd326eb16e00aa141bfc3e8c4f610ef31ae97" exitCode=0 Jan 29 16:38:30 crc kubenswrapper[4886]: I0129 16:38:30.088193 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerDied","Data":"50ba5c9bdbddc145f7d20c044a7cd326eb16e00aa141bfc3e8c4f610ef31ae97"} Jan 29 16:38:30 crc kubenswrapper[4886]: I0129 16:38:30.088432 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerStarted","Data":"84a645b31233e6f6691e7af3a8d18c33f1db7629388f3007d7e51e43f9f65e97"} Jan 29 16:38:30 crc kubenswrapper[4886]: I0129 16:38:30.088461 4886 scope.go:117] "RemoveContainer" containerID="773fe28c1c2f4b4e6b5a35ea611b7d8ab8f392d8f1b68bb09ec93e5c483b53ed" Jan 29 16:38:38 crc kubenswrapper[4886]: E0129 16:38:38.624215 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:38:42 crc kubenswrapper[4886]: I0129 16:38:42.204359 4886 generic.go:334] "Generic (PLEG): container finished" podID="69003a39-1c09-4087-a494-ebfd69e973cf" containerID="9bd48ab4996ca74fa989778e83dba86fbb2f2ad2104534befcf501673ddd232f" exitCode=0 Jan 29 16:38:42 crc kubenswrapper[4886]: I0129 16:38:42.204911 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfv6k" event={"ID":"69003a39-1c09-4087-a494-ebfd69e973cf","Type":"ContainerDied","Data":"9bd48ab4996ca74fa989778e83dba86fbb2f2ad2104534befcf501673ddd232f"} Jan 29 16:38:43 crc kubenswrapper[4886]: I0129 16:38:43.215757 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfv6k" 
event={"ID":"69003a39-1c09-4087-a494-ebfd69e973cf","Type":"ContainerStarted","Data":"735ad1f3c641d99dc2e721ad33c111100670ea307d45a8bb7eba837fe9c269ef"} Jan 29 16:38:43 crc kubenswrapper[4886]: I0129 16:38:43.240050 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jfv6k" podStartSLOduration=2.266022337 podStartE2EDuration="10m39.240024748s" podCreationTimestamp="2026-01-29 16:28:04 +0000 UTC" firstStartedPulling="2026-01-29 16:28:05.688495947 +0000 UTC m=+368.597215219" lastFinishedPulling="2026-01-29 16:38:42.662498318 +0000 UTC m=+1005.571217630" observedRunningTime="2026-01-29 16:38:43.235379803 +0000 UTC m=+1006.144099085" watchObservedRunningTime="2026-01-29 16:38:43.240024748 +0000 UTC m=+1006.148744030" Jan 29 16:38:44 crc kubenswrapper[4886]: I0129 16:38:44.648711 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jfv6k" Jan 29 16:38:44 crc kubenswrapper[4886]: I0129 16:38:44.648796 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jfv6k" Jan 29 16:38:44 crc kubenswrapper[4886]: I0129 16:38:44.708790 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jfv6k" Jan 29 16:38:49 crc kubenswrapper[4886]: E0129 16:38:49.621556 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" Jan 29 16:38:54 crc kubenswrapper[4886]: I0129 16:38:54.719254 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jfv6k" Jan 29 16:39:02 crc kubenswrapper[4886]: I0129 16:39:02.351803 4886 generic.go:334] "Generic (PLEG): container finished" podID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" containerID="0fa864e4732d0bb9a1a68d7843a62bc56027d9ccdfea2ad23148f5d87b7ecd0c" exitCode=0 Jan 29 16:39:02 crc kubenswrapper[4886]: I0129 16:39:02.351858 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkk68" event={"ID":"d84ce3e9-c41a-4a08-8d86-2a918d5e9450","Type":"ContainerDied","Data":"0fa864e4732d0bb9a1a68d7843a62bc56027d9ccdfea2ad23148f5d87b7ecd0c"} Jan 29 16:39:03 crc kubenswrapper[4886]: I0129 16:39:03.362425 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkk68" event={"ID":"d84ce3e9-c41a-4a08-8d86-2a918d5e9450","Type":"ContainerStarted","Data":"29f7d7e31f9e12ad7f76231137a2e9a61ff5af739a92e0ab7f9fef0c87106990"} Jan 29 16:39:03 crc kubenswrapper[4886]: I0129 16:39:03.383754 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zkk68" podStartSLOduration=2.377020432 podStartE2EDuration="10m56.383732099s" podCreationTimestamp="2026-01-29 16:28:07 +0000 UTC" firstStartedPulling="2026-01-29 16:28:08.721120306 +0000 UTC m=+371.629839588" lastFinishedPulling="2026-01-29 16:39:02.727831983 +0000 UTC m=+1025.636551255" observedRunningTime="2026-01-29 16:39:03.382476232 +0000 UTC m=+1026.291195544" watchObservedRunningTime="2026-01-29 16:39:03.383732099 +0000 UTC m=+1026.292451371" Jan 29 16:39:07 crc kubenswrapper[4886]: I0129 16:39:07.583296 4886 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zkk68" Jan 29 16:39:07 crc kubenswrapper[4886]: I0129 16:39:07.583762 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zkk68" Jan 29 16:39:08 crc kubenswrapper[4886]: I0129 16:39:08.635834 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" containerName="registry-server" probeResult="failure" output=< Jan 29 16:39:08 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Jan 29 16:39:08 crc kubenswrapper[4886]: > Jan 29 16:39:17 crc kubenswrapper[4886]: I0129 16:39:17.653404 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zkk68" Jan 29 16:39:17 crc kubenswrapper[4886]: I0129 16:39:17.777872 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zkk68" Jan 29 16:40:59 crc kubenswrapper[4886]: I0129 16:40:59.661245 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:40:59 crc kubenswrapper[4886]: I0129 16:40:59.662519 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:41:29 crc kubenswrapper[4886]: I0129 16:41:29.661212 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:41:29 crc kubenswrapper[4886]: I0129 16:41:29.662023 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:41:59 crc kubenswrapper[4886]: I0129 16:41:59.661591 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:41:59 crc kubenswrapper[4886]: I0129 16:41:59.662171 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:41:59 crc kubenswrapper[4886]: I0129 16:41:59.662266 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" Jan 29 16:41:59 crc 
kubenswrapper[4886]: I0129 16:41:59.663023 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"84a645b31233e6f6691e7af3a8d18c33f1db7629388f3007d7e51e43f9f65e97"} pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:41:59 crc kubenswrapper[4886]: I0129 16:41:59.663114 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" containerID="cri-o://84a645b31233e6f6691e7af3a8d18c33f1db7629388f3007d7e51e43f9f65e97" gracePeriod=600 Jan 29 16:42:00 crc kubenswrapper[4886]: I0129 16:42:00.619604 4886 generic.go:334] "Generic (PLEG): container finished" podID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerID="84a645b31233e6f6691e7af3a8d18c33f1db7629388f3007d7e51e43f9f65e97" exitCode=0 Jan 29 16:42:00 crc kubenswrapper[4886]: I0129 16:42:00.626047 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerDied","Data":"84a645b31233e6f6691e7af3a8d18c33f1db7629388f3007d7e51e43f9f65e97"} Jan 29 16:42:00 crc kubenswrapper[4886]: I0129 16:42:00.626121 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerStarted","Data":"e07342110c4b02787cb4723c63fa377397be4b574d1be34193ab1f7b4cebac54"} Jan 29 16:42:00 crc kubenswrapper[4886]: I0129 16:42:00.626152 4886 scope.go:117] "RemoveContainer" containerID="50ba5c9bdbddc145f7d20c044a7cd326eb16e00aa141bfc3e8c4f610ef31ae97" Jan 29 16:42:10 crc kubenswrapper[4886]: I0129 16:42:10.010147 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz"] Jan 29 16:42:10 crc kubenswrapper[4886]: E0129 16:42:10.011074 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bc856aa-a27b-4856-a888-7104df47cf30" containerName="registry-server" Jan 29 16:42:10 crc kubenswrapper[4886]: I0129 16:42:10.011090 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bc856aa-a27b-4856-a888-7104df47cf30" containerName="registry-server" Jan 29 16:42:10 crc kubenswrapper[4886]: E0129 16:42:10.011103 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bc856aa-a27b-4856-a888-7104df47cf30" containerName="extract-utilities" Jan 29 16:42:10 crc kubenswrapper[4886]: I0129 16:42:10.011112 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bc856aa-a27b-4856-a888-7104df47cf30" containerName="extract-utilities" Jan 29 16:42:10 crc kubenswrapper[4886]: E0129 16:42:10.011132 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92af1116-2260-4c2f-a3b2-b3045d51065e" containerName="extract-utilities" Jan 29 16:42:10 crc kubenswrapper[4886]: I0129 16:42:10.011140 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="92af1116-2260-4c2f-a3b2-b3045d51065e" containerName="extract-utilities" Jan 29 16:42:10 crc kubenswrapper[4886]: E0129 16:42:10.011153 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92af1116-2260-4c2f-a3b2-b3045d51065e" containerName="registry-server" Jan 29 16:42:10 crc 
kubenswrapper[4886]: I0129 16:42:10.011160 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="92af1116-2260-4c2f-a3b2-b3045d51065e" containerName="registry-server" Jan 29 16:42:10 crc kubenswrapper[4886]: E0129 16:42:10.011172 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92af1116-2260-4c2f-a3b2-b3045d51065e" containerName="extract-content" Jan 29 16:42:10 crc kubenswrapper[4886]: I0129 16:42:10.011180 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="92af1116-2260-4c2f-a3b2-b3045d51065e" containerName="extract-content" Jan 29 16:42:10 crc kubenswrapper[4886]: E0129 16:42:10.011200 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bc856aa-a27b-4856-a888-7104df47cf30" containerName="extract-content" Jan 29 16:42:10 crc kubenswrapper[4886]: I0129 16:42:10.011207 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bc856aa-a27b-4856-a888-7104df47cf30" containerName="extract-content" Jan 29 16:42:10 crc kubenswrapper[4886]: I0129 16:42:10.011372 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bc856aa-a27b-4856-a888-7104df47cf30" containerName="registry-server" Jan 29 16:42:10 crc kubenswrapper[4886]: I0129 16:42:10.011394 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="92af1116-2260-4c2f-a3b2-b3045d51065e" containerName="registry-server" Jan 29 16:42:10 crc kubenswrapper[4886]: I0129 16:42:10.012396 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz" Jan 29 16:42:10 crc kubenswrapper[4886]: I0129 16:42:10.015103 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 16:42:10 crc kubenswrapper[4886]: I0129 16:42:10.023128 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz"] Jan 29 16:42:10 crc kubenswrapper[4886]: I0129 16:42:10.160553 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20a67e3b-3393-4dea-81c8-42c2e22ad315-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz\" (UID: \"20a67e3b-3393-4dea-81c8-42c2e22ad315\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz" Jan 29 16:42:10 crc kubenswrapper[4886]: I0129 16:42:10.160656 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20a67e3b-3393-4dea-81c8-42c2e22ad315-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz\" (UID: \"20a67e3b-3393-4dea-81c8-42c2e22ad315\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz" Jan 29 16:42:10 crc kubenswrapper[4886]: I0129 16:42:10.160886 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxzmn\" (UniqueName: \"kubernetes.io/projected/20a67e3b-3393-4dea-81c8-42c2e22ad315-kube-api-access-lxzmn\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz\" (UID: \"20a67e3b-3393-4dea-81c8-42c2e22ad315\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz" Jan 29 16:42:10 crc kubenswrapper[4886]: I0129 16:42:10.262130 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20a67e3b-3393-4dea-81c8-42c2e22ad315-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz\" (UID: \"20a67e3b-3393-4dea-81c8-42c2e22ad315\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz" Jan 29 16:42:10 crc kubenswrapper[4886]: I0129 16:42:10.262240 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxzmn\" (UniqueName: \"kubernetes.io/projected/20a67e3b-3393-4dea-81c8-42c2e22ad315-kube-api-access-lxzmn\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz\" (UID: \"20a67e3b-3393-4dea-81c8-42c2e22ad315\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz" Jan 29 16:42:10 crc kubenswrapper[4886]: I0129 16:42:10.262463 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20a67e3b-3393-4dea-81c8-42c2e22ad315-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz\" (UID: \"20a67e3b-3393-4dea-81c8-42c2e22ad315\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz" Jan 29 16:42:10 crc kubenswrapper[4886]: I0129 16:42:10.262657 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20a67e3b-3393-4dea-81c8-42c2e22ad315-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz\" (UID: \"20a67e3b-3393-4dea-81c8-42c2e22ad315\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz" Jan 29 16:42:10 crc kubenswrapper[4886]: I0129 16:42:10.263185 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20a67e3b-3393-4dea-81c8-42c2e22ad315-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz\" (UID: \"20a67e3b-3393-4dea-81c8-42c2e22ad315\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz" Jan 29 16:42:10 crc kubenswrapper[4886]: I0129 16:42:10.289697 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxzmn\" (UniqueName: \"kubernetes.io/projected/20a67e3b-3393-4dea-81c8-42c2e22ad315-kube-api-access-lxzmn\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz\" (UID: \"20a67e3b-3393-4dea-81c8-42c2e22ad315\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz" Jan 29 16:42:10 crc kubenswrapper[4886]: I0129 16:42:10.339517 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz" Jan 29 16:42:10 crc kubenswrapper[4886]: I0129 16:42:10.778923 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz"] Jan 29 16:42:11 crc kubenswrapper[4886]: I0129 16:42:11.702505 4886 generic.go:334] "Generic (PLEG): container finished" podID="20a67e3b-3393-4dea-81c8-42c2e22ad315" containerID="5d883c5a30d8f4bbb039e6aaa651b8e09e6b2a8064244a25c33a761d3d8863ae" exitCode=0 Jan 29 16:42:11 crc kubenswrapper[4886]: I0129 16:42:11.702563 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz" event={"ID":"20a67e3b-3393-4dea-81c8-42c2e22ad315","Type":"ContainerDied","Data":"5d883c5a30d8f4bbb039e6aaa651b8e09e6b2a8064244a25c33a761d3d8863ae"} Jan 29 16:42:11 crc kubenswrapper[4886]: I0129 16:42:11.702928 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz" event={"ID":"20a67e3b-3393-4dea-81c8-42c2e22ad315","Type":"ContainerStarted","Data":"976f1abd45dc9a03c85afaf2f393d899a8fe7d61004333b35e039ff0d753b2d4"} Jan 29 16:42:11 crc kubenswrapper[4886]: I0129 16:42:11.706425 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:42:13 crc kubenswrapper[4886]: I0129 16:42:13.726605 4886 generic.go:334] "Generic (PLEG): container finished" podID="20a67e3b-3393-4dea-81c8-42c2e22ad315" containerID="33b121937df6965f1e7c4b97eec963e1caa986d708bab7e6baf54e700c6b9a38" exitCode=0 Jan 29 16:42:13 crc kubenswrapper[4886]: I0129 16:42:13.726678 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz" event={"ID":"20a67e3b-3393-4dea-81c8-42c2e22ad315","Type":"ContainerDied","Data":"33b121937df6965f1e7c4b97eec963e1caa986d708bab7e6baf54e700c6b9a38"} Jan 29 16:42:14 crc kubenswrapper[4886]: I0129 16:42:14.736230 4886 generic.go:334] "Generic (PLEG): container finished" podID="20a67e3b-3393-4dea-81c8-42c2e22ad315" containerID="f97710e37d132101bc18cdd88c6b7f51c7d65099d23a9fcf1887c1bba9f84a3e" exitCode=0 Jan 29 16:42:14 crc kubenswrapper[4886]: I0129 16:42:14.736359 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz" event={"ID":"20a67e3b-3393-4dea-81c8-42c2e22ad315","Type":"ContainerDied","Data":"f97710e37d132101bc18cdd88c6b7f51c7d65099d23a9fcf1887c1bba9f84a3e"} Jan 29 16:42:16 crc kubenswrapper[4886]: I0129 16:42:16.015190 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz" Jan 29 16:42:16 crc kubenswrapper[4886]: I0129 16:42:16.156437 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20a67e3b-3393-4dea-81c8-42c2e22ad315-bundle\") pod \"20a67e3b-3393-4dea-81c8-42c2e22ad315\" (UID: \"20a67e3b-3393-4dea-81c8-42c2e22ad315\") " Jan 29 16:42:16 crc kubenswrapper[4886]: I0129 16:42:16.156486 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxzmn\" (UniqueName: \"kubernetes.io/projected/20a67e3b-3393-4dea-81c8-42c2e22ad315-kube-api-access-lxzmn\") pod \"20a67e3b-3393-4dea-81c8-42c2e22ad315\" (UID: \"20a67e3b-3393-4dea-81c8-42c2e22ad315\") " Jan 29 16:42:16 crc kubenswrapper[4886]: I0129 16:42:16.156518 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20a67e3b-3393-4dea-81c8-42c2e22ad315-util\") pod \"20a67e3b-3393-4dea-81c8-42c2e22ad315\" (UID: \"20a67e3b-3393-4dea-81c8-42c2e22ad315\") " Jan 29 16:42:16 crc kubenswrapper[4886]: I0129 16:42:16.162313 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20a67e3b-3393-4dea-81c8-42c2e22ad315-bundle" (OuterVolumeSpecName: "bundle") pod "20a67e3b-3393-4dea-81c8-42c2e22ad315" (UID: "20a67e3b-3393-4dea-81c8-42c2e22ad315"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:42:16 crc kubenswrapper[4886]: I0129 16:42:16.169219 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20a67e3b-3393-4dea-81c8-42c2e22ad315-kube-api-access-lxzmn" (OuterVolumeSpecName: "kube-api-access-lxzmn") pod "20a67e3b-3393-4dea-81c8-42c2e22ad315" (UID: "20a67e3b-3393-4dea-81c8-42c2e22ad315"). InnerVolumeSpecName "kube-api-access-lxzmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:42:16 crc kubenswrapper[4886]: I0129 16:42:16.187029 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20a67e3b-3393-4dea-81c8-42c2e22ad315-util" (OuterVolumeSpecName: "util") pod "20a67e3b-3393-4dea-81c8-42c2e22ad315" (UID: "20a67e3b-3393-4dea-81c8-42c2e22ad315"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:42:16 crc kubenswrapper[4886]: I0129 16:42:16.258293 4886 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20a67e3b-3393-4dea-81c8-42c2e22ad315-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:42:16 crc kubenswrapper[4886]: I0129 16:42:16.258353 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxzmn\" (UniqueName: \"kubernetes.io/projected/20a67e3b-3393-4dea-81c8-42c2e22ad315-kube-api-access-lxzmn\") on node \"crc\" DevicePath \"\"" Jan 29 16:42:16 crc kubenswrapper[4886]: I0129 16:42:16.258366 4886 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20a67e3b-3393-4dea-81c8-42c2e22ad315-util\") on node \"crc\" DevicePath \"\"" Jan 29 16:42:16 crc kubenswrapper[4886]: I0129 16:42:16.751924 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz" event={"ID":"20a67e3b-3393-4dea-81c8-42c2e22ad315","Type":"ContainerDied","Data":"976f1abd45dc9a03c85afaf2f393d899a8fe7d61004333b35e039ff0d753b2d4"} Jan 29 16:42:16 crc kubenswrapper[4886]: I0129 16:42:16.751981 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="976f1abd45dc9a03c85afaf2f393d899a8fe7d61004333b35e039ff0d753b2d4" Jan 29 16:42:16 crc kubenswrapper[4886]: I0129 16:42:16.751989 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz" Jan 29 16:42:28 crc kubenswrapper[4886]: I0129 16:42:28.921683 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-72k5z"] Jan 29 16:42:28 crc kubenswrapper[4886]: E0129 16:42:28.922504 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20a67e3b-3393-4dea-81c8-42c2e22ad315" containerName="util" Jan 29 16:42:28 crc kubenswrapper[4886]: I0129 16:42:28.922522 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="20a67e3b-3393-4dea-81c8-42c2e22ad315" containerName="util" Jan 29 16:42:28 crc kubenswrapper[4886]: E0129 16:42:28.922540 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20a67e3b-3393-4dea-81c8-42c2e22ad315" containerName="pull" Jan 29 16:42:28 crc kubenswrapper[4886]: I0129 16:42:28.922549 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="20a67e3b-3393-4dea-81c8-42c2e22ad315" containerName="pull" Jan 29 16:42:28 crc kubenswrapper[4886]: E0129 16:42:28.922569 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20a67e3b-3393-4dea-81c8-42c2e22ad315" containerName="extract" Jan 29 16:42:28 crc kubenswrapper[4886]: I0129 16:42:28.922578 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="20a67e3b-3393-4dea-81c8-42c2e22ad315" containerName="extract" Jan 29 16:42:28 crc kubenswrapper[4886]: I0129 16:42:28.922702 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="20a67e3b-3393-4dea-81c8-42c2e22ad315" containerName="extract" Jan 29 16:42:28 crc kubenswrapper[4886]: I0129 16:42:28.923206 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-72k5z" Jan 29 16:42:28 crc kubenswrapper[4886]: I0129 16:42:28.934264 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 29 16:42:28 crc kubenswrapper[4886]: I0129 16:42:28.934914 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 29 16:42:28 crc kubenswrapper[4886]: I0129 16:42:28.934981 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-87x2p" Jan 29 16:42:28 crc kubenswrapper[4886]: I0129 16:42:28.940887 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-72k5z"] Jan 29 16:42:28 crc kubenswrapper[4886]: I0129 16:42:28.979123 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-78f4cbbdd9-hrhb5"] Jan 29 16:42:28 crc kubenswrapper[4886]: I0129 16:42:28.990152 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-78f4cbbdd9-75xq9"] Jan 29 16:42:28 crc kubenswrapper[4886]: I0129 16:42:28.990840 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78f4cbbdd9-75xq9" Jan 29 16:42:28 crc kubenswrapper[4886]: I0129 16:42:28.991151 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78f4cbbdd9-hrhb5" Jan 29 16:42:28 crc kubenswrapper[4886]: I0129 16:42:28.995152 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.000428 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-78f4cbbdd9-hrhb5"] Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.001142 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-kqkdx" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.006045 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-78f4cbbdd9-75xq9"] Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.036039 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxpmf\" (UniqueName: \"kubernetes.io/projected/1151b336-be43-4e43-959d-463c956e9bc4-kube-api-access-pxpmf\") pod \"obo-prometheus-operator-68bc856cb9-72k5z\" (UID: \"1151b336-be43-4e43-959d-463c956e9bc4\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-72k5z" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.137457 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e2e7310d-6390-4a0d-b0bd-f8467c80517c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-78f4cbbdd9-75xq9\" (UID: \"e2e7310d-6390-4a0d-b0bd-f8467c80517c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78f4cbbdd9-75xq9" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.137517 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e1472730-ce1e-4333-a6c6-930196b9d257-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-78f4cbbdd9-hrhb5\" (UID: \"e1472730-ce1e-4333-a6c6-930196b9d257\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78f4cbbdd9-hrhb5" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.137574 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxpmf\" (UniqueName: \"kubernetes.io/projected/1151b336-be43-4e43-959d-463c956e9bc4-kube-api-access-pxpmf\") pod \"obo-prometheus-operator-68bc856cb9-72k5z\" (UID: \"1151b336-be43-4e43-959d-463c956e9bc4\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-72k5z" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.137589 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e1472730-ce1e-4333-a6c6-930196b9d257-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-78f4cbbdd9-hrhb5\" (UID: \"e1472730-ce1e-4333-a6c6-930196b9d257\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78f4cbbdd9-hrhb5" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.137611 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e2e7310d-6390-4a0d-b0bd-f8467c80517c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-78f4cbbdd9-75xq9\" (UID: \"e2e7310d-6390-4a0d-b0bd-f8467c80517c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78f4cbbdd9-75xq9" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.170472 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxpmf\" (UniqueName: \"kubernetes.io/projected/1151b336-be43-4e43-959d-463c956e9bc4-kube-api-access-pxpmf\") pod \"obo-prometheus-operator-68bc856cb9-72k5z\" (UID: \"1151b336-be43-4e43-959d-463c956e9bc4\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-72k5z" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.173707 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-w5qml"] Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.174493 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-w5qml" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.176852 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.179849 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-qx7cn" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.211532 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-w5qml"] Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.238704 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e1472730-ce1e-4333-a6c6-930196b9d257-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-78f4cbbdd9-hrhb5\" (UID: \"e1472730-ce1e-4333-a6c6-930196b9d257\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78f4cbbdd9-hrhb5" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.238751 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e2e7310d-6390-4a0d-b0bd-f8467c80517c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-78f4cbbdd9-75xq9\" (UID: \"e2e7310d-6390-4a0d-b0bd-f8467c80517c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78f4cbbdd9-75xq9" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.238802 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e2e7310d-6390-4a0d-b0bd-f8467c80517c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-78f4cbbdd9-75xq9\" (UID: \"e2e7310d-6390-4a0d-b0bd-f8467c80517c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78f4cbbdd9-75xq9" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.238831 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e1472730-ce1e-4333-a6c6-930196b9d257-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-78f4cbbdd9-hrhb5\" (UID: \"e1472730-ce1e-4333-a6c6-930196b9d257\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78f4cbbdd9-hrhb5" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.241585 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-72k5z" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.242792 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e1472730-ce1e-4333-a6c6-930196b9d257-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-78f4cbbdd9-hrhb5\" (UID: \"e1472730-ce1e-4333-a6c6-930196b9d257\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78f4cbbdd9-hrhb5" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.242944 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e1472730-ce1e-4333-a6c6-930196b9d257-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-78f4cbbdd9-hrhb5\" (UID: \"e1472730-ce1e-4333-a6c6-930196b9d257\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78f4cbbdd9-hrhb5" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.243932 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e2e7310d-6390-4a0d-b0bd-f8467c80517c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-78f4cbbdd9-75xq9\" (UID: \"e2e7310d-6390-4a0d-b0bd-f8467c80517c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78f4cbbdd9-75xq9" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.259783 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e2e7310d-6390-4a0d-b0bd-f8467c80517c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-78f4cbbdd9-75xq9\" (UID: \"e2e7310d-6390-4a0d-b0bd-f8467c80517c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78f4cbbdd9-75xq9" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.312767 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78f4cbbdd9-75xq9" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.324465 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78f4cbbdd9-hrhb5" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.343500 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59nxm\" (UniqueName: \"kubernetes.io/projected/17549a68-0567-40f8-9dda-37cd61f71b94-kube-api-access-59nxm\") pod \"observability-operator-59bdc8b94-w5qml\" (UID: \"17549a68-0567-40f8-9dda-37cd61f71b94\") " pod="openshift-operators/observability-operator-59bdc8b94-w5qml" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.343672 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/17549a68-0567-40f8-9dda-37cd61f71b94-observability-operator-tls\") pod \"observability-operator-59bdc8b94-w5qml\" (UID: \"17549a68-0567-40f8-9dda-37cd61f71b94\") " pod="openshift-operators/observability-operator-59bdc8b94-w5qml" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.352137 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-dtcpm"] Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.352976 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-dtcpm" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.366394 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-pmhdg" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.375225 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-dtcpm"] Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.445380 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/17549a68-0567-40f8-9dda-37cd61f71b94-observability-operator-tls\") pod \"observability-operator-59bdc8b94-w5qml\" (UID: \"17549a68-0567-40f8-9dda-37cd61f71b94\") " pod="openshift-operators/observability-operator-59bdc8b94-w5qml" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.445445 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59nxm\" (UniqueName: \"kubernetes.io/projected/17549a68-0567-40f8-9dda-37cd61f71b94-kube-api-access-59nxm\") pod \"observability-operator-59bdc8b94-w5qml\" (UID: \"17549a68-0567-40f8-9dda-37cd61f71b94\") " pod="openshift-operators/observability-operator-59bdc8b94-w5qml" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.453755 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/17549a68-0567-40f8-9dda-37cd61f71b94-observability-operator-tls\") pod \"observability-operator-59bdc8b94-w5qml\" (UID: \"17549a68-0567-40f8-9dda-37cd61f71b94\") " pod="openshift-operators/observability-operator-59bdc8b94-w5qml" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.467610 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59nxm\" (UniqueName: \"kubernetes.io/projected/17549a68-0567-40f8-9dda-37cd61f71b94-kube-api-access-59nxm\") pod \"observability-operator-59bdc8b94-w5qml\" (UID: \"17549a68-0567-40f8-9dda-37cd61f71b94\") " pod="openshift-operators/observability-operator-59bdc8b94-w5qml" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.504833 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-w5qml" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.547743 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q58sl\" (UniqueName: \"kubernetes.io/projected/d2a26d31-689d-4052-9df2-1654feb68c2d-kube-api-access-q58sl\") pod \"perses-operator-5bf474d74f-dtcpm\" (UID: \"d2a26d31-689d-4052-9df2-1654feb68c2d\") " pod="openshift-operators/perses-operator-5bf474d74f-dtcpm" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.547864 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/d2a26d31-689d-4052-9df2-1654feb68c2d-openshift-service-ca\") pod \"perses-operator-5bf474d74f-dtcpm\" (UID: \"d2a26d31-689d-4052-9df2-1654feb68c2d\") " pod="openshift-operators/perses-operator-5bf474d74f-dtcpm" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.599406 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-72k5z"] Jan 29 16:42:29 crc kubenswrapper[4886]: W0129 16:42:29.622654 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1151b336_be43_4e43_959d_463c956e9bc4.slice/crio-d59ae2b4bf608b7fa1b68d986522ef7dbaaad1a9d834a7636f0a9fc4f8df6c56 WatchSource:0}: Error finding container d59ae2b4bf608b7fa1b68d986522ef7dbaaad1a9d834a7636f0a9fc4f8df6c56: Status 404 returned error can't find the container with id d59ae2b4bf608b7fa1b68d986522ef7dbaaad1a9d834a7636f0a9fc4f8df6c56 Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.649719 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q58sl\" (UniqueName: \"kubernetes.io/projected/d2a26d31-689d-4052-9df2-1654feb68c2d-kube-api-access-q58sl\") pod \"perses-operator-5bf474d74f-dtcpm\" (UID: \"d2a26d31-689d-4052-9df2-1654feb68c2d\") " pod="openshift-operators/perses-operator-5bf474d74f-dtcpm" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.649821 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/d2a26d31-689d-4052-9df2-1654feb68c2d-openshift-service-ca\") pod \"perses-operator-5bf474d74f-dtcpm\" (UID: \"d2a26d31-689d-4052-9df2-1654feb68c2d\") " pod="openshift-operators/perses-operator-5bf474d74f-dtcpm" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.651137 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/d2a26d31-689d-4052-9df2-1654feb68c2d-openshift-service-ca\") pod \"perses-operator-5bf474d74f-dtcpm\" (UID: \"d2a26d31-689d-4052-9df2-1654feb68c2d\") " pod="openshift-operators/perses-operator-5bf474d74f-dtcpm" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.660086 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-78f4cbbdd9-75xq9"] Jan 29 16:42:29 crc kubenswrapper[4886]: W0129 16:42:29.665653 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2e7310d_6390_4a0d_b0bd_f8467c80517c.slice/crio-c3a2aed72fa7cf38cac1b034388741047a39f6194ebba489f12e0f20f05d7e1a WatchSource:0}: Error finding container 
c3a2aed72fa7cf38cac1b034388741047a39f6194ebba489f12e0f20f05d7e1a: Status 404 returned error can't find the container with id c3a2aed72fa7cf38cac1b034388741047a39f6194ebba489f12e0f20f05d7e1a Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.676065 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q58sl\" (UniqueName: \"kubernetes.io/projected/d2a26d31-689d-4052-9df2-1654feb68c2d-kube-api-access-q58sl\") pod \"perses-operator-5bf474d74f-dtcpm\" (UID: \"d2a26d31-689d-4052-9df2-1654feb68c2d\") " pod="openshift-operators/perses-operator-5bf474d74f-dtcpm" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.679067 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-dtcpm" Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.701478 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-78f4cbbdd9-hrhb5"] Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.826570 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78f4cbbdd9-75xq9" event={"ID":"e2e7310d-6390-4a0d-b0bd-f8467c80517c","Type":"ContainerStarted","Data":"c3a2aed72fa7cf38cac1b034388741047a39f6194ebba489f12e0f20f05d7e1a"} Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.827314 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78f4cbbdd9-hrhb5" event={"ID":"e1472730-ce1e-4333-a6c6-930196b9d257","Type":"ContainerStarted","Data":"c8873af3ed6924f4ee99c1c3a5b3b1fe51732f9684c5fb5f30fd703ee439948d"} Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.828227 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-72k5z" event={"ID":"1151b336-be43-4e43-959d-463c956e9bc4","Type":"ContainerStarted","Data":"d59ae2b4bf608b7fa1b68d986522ef7dbaaad1a9d834a7636f0a9fc4f8df6c56"} Jan 29 16:42:29 crc kubenswrapper[4886]: I0129 16:42:29.938674 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-dtcpm"] Jan 29 16:42:29 crc kubenswrapper[4886]: W0129 16:42:29.941757 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2a26d31_689d_4052_9df2_1654feb68c2d.slice/crio-28badb714870bb63b729974e5f3d38902243caabbf53e05f6a009feeb7a0b316 WatchSource:0}: Error finding container 28badb714870bb63b729974e5f3d38902243caabbf53e05f6a009feeb7a0b316: Status 404 returned error can't find the container with id 28badb714870bb63b729974e5f3d38902243caabbf53e05f6a009feeb7a0b316 Jan 29 16:42:30 crc kubenswrapper[4886]: I0129 16:42:30.006304 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-w5qml"] Jan 29 16:42:30 crc kubenswrapper[4886]: I0129 16:42:30.834856 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-w5qml" event={"ID":"17549a68-0567-40f8-9dda-37cd61f71b94","Type":"ContainerStarted","Data":"f28a277f08071599754e25d38da40d646af2c0915c4cc3ecfb76416f18ac3e77"} Jan 29 16:42:30 crc kubenswrapper[4886]: I0129 16:42:30.836231 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-dtcpm" 
event={"ID":"d2a26d31-689d-4052-9df2-1654feb68c2d","Type":"ContainerStarted","Data":"28badb714870bb63b729974e5f3d38902243caabbf53e05f6a009feeb7a0b316"} Jan 29 16:42:44 crc kubenswrapper[4886]: I0129 16:42:44.923175 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-w5qml" event={"ID":"17549a68-0567-40f8-9dda-37cd61f71b94","Type":"ContainerStarted","Data":"5f251584cd4a72392bf82fcbaac03e86f9d34fedf3e60b93b7d5cf1e7fb50a29"} Jan 29 16:42:44 crc kubenswrapper[4886]: I0129 16:42:44.923833 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-w5qml" Jan 29 16:42:44 crc kubenswrapper[4886]: I0129 16:42:44.925459 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-w5qml" Jan 29 16:42:44 crc kubenswrapper[4886]: I0129 16:42:44.925486 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-dtcpm" event={"ID":"d2a26d31-689d-4052-9df2-1654feb68c2d","Type":"ContainerStarted","Data":"9992d8e0634ed981ff9fd7bc0427ba554332b02075e523f4c92a45ceda3b6d32"} Jan 29 16:42:44 crc kubenswrapper[4886]: I0129 16:42:44.925631 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-dtcpm" Jan 29 16:42:44 crc kubenswrapper[4886]: I0129 16:42:44.926783 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-72k5z" event={"ID":"1151b336-be43-4e43-959d-463c956e9bc4","Type":"ContainerStarted","Data":"a9babc6a5fe0ba78e4b2020f1e2034d16a2615aa4af5bb2a69984dd3ca27c70b"} Jan 29 16:42:44 crc kubenswrapper[4886]: I0129 16:42:44.928148 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78f4cbbdd9-75xq9" event={"ID":"e2e7310d-6390-4a0d-b0bd-f8467c80517c","Type":"ContainerStarted","Data":"807ec1adae81c9e16b2e9afbb7d38b63e30bf1a658ddb7cef971234e3f58eeaa"} Jan 29 16:42:44 crc kubenswrapper[4886]: I0129 16:42:44.929729 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78f4cbbdd9-hrhb5" event={"ID":"e1472730-ce1e-4333-a6c6-930196b9d257","Type":"ContainerStarted","Data":"9de3db81fb223377845988b8fe8a70e61eee46c753b8d2742e232ec42c7c4d5c"} Jan 29 16:42:44 crc kubenswrapper[4886]: I0129 16:42:44.949517 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-w5qml" podStartSLOduration=1.778014539 podStartE2EDuration="15.949504031s" podCreationTimestamp="2026-01-29 16:42:29 +0000 UTC" firstStartedPulling="2026-01-29 16:42:30.007827503 +0000 UTC m=+1232.916546775" lastFinishedPulling="2026-01-29 16:42:44.179316995 +0000 UTC m=+1247.088036267" observedRunningTime="2026-01-29 16:42:44.947175275 +0000 UTC m=+1247.855894557" watchObservedRunningTime="2026-01-29 16:42:44.949504031 +0000 UTC m=+1247.858223303" Jan 29 16:42:44 crc kubenswrapper[4886]: I0129 16:42:44.978806 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78f4cbbdd9-75xq9" podStartSLOduration=2.579788996 podStartE2EDuration="16.9787837s" podCreationTimestamp="2026-01-29 16:42:28 +0000 UTC" firstStartedPulling="2026-01-29 16:42:29.679085645 +0000 UTC m=+1232.587804927" lastFinishedPulling="2026-01-29 
16:42:44.078080359 +0000 UTC m=+1246.986799631" observedRunningTime="2026-01-29 16:42:44.973764127 +0000 UTC m=+1247.882483409" watchObservedRunningTime="2026-01-29 16:42:44.9787837 +0000 UTC m=+1247.887502972"
Jan 29 16:42:45 crc kubenswrapper[4886]: I0129 16:42:45.010748 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78f4cbbdd9-hrhb5" podStartSLOduration=2.673875029 podStartE2EDuration="17.010734304s" podCreationTimestamp="2026-01-29 16:42:28 +0000 UTC" firstStartedPulling="2026-01-29 16:42:29.739201867 +0000 UTC m=+1232.647921139" lastFinishedPulling="2026-01-29 16:42:44.076061142 +0000 UTC m=+1246.984780414" observedRunningTime="2026-01-29 16:42:45.009444258 +0000 UTC m=+1247.918163540" watchObservedRunningTime="2026-01-29 16:42:45.010734304 +0000 UTC m=+1247.919453576"
Jan 29 16:42:45 crc kubenswrapper[4886]: I0129 16:42:45.036135 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-dtcpm" podStartSLOduration=1.909028948 podStartE2EDuration="16.036115493s" podCreationTimestamp="2026-01-29 16:42:29 +0000 UTC" firstStartedPulling="2026-01-29 16:42:29.943985665 +0000 UTC m=+1232.852704927" lastFinishedPulling="2026-01-29 16:42:44.0710722 +0000 UTC m=+1246.979791472" observedRunningTime="2026-01-29 16:42:45.034874408 +0000 UTC m=+1247.943593680" watchObservedRunningTime="2026-01-29 16:42:45.036115493 +0000 UTC m=+1247.944834765"
Jan 29 16:42:45 crc kubenswrapper[4886]: I0129 16:42:45.050019 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-72k5z" podStartSLOduration=2.602313425 podStartE2EDuration="17.049994886s" podCreationTimestamp="2026-01-29 16:42:28 +0000 UTC" firstStartedPulling="2026-01-29 16:42:29.629085 +0000 UTC m=+1232.537804272" lastFinishedPulling="2026-01-29 16:42:44.076766471 +0000 UTC m=+1246.985485733" observedRunningTime="2026-01-29 16:42:45.047133755 +0000 UTC m=+1247.955853027" watchObservedRunningTime="2026-01-29 16:42:45.049994886 +0000 UTC m=+1247.958714158"
Jan 29 16:42:49 crc kubenswrapper[4886]: I0129 16:42:49.681703 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-dtcpm"
Jan 29 16:42:53 crc kubenswrapper[4886]: I0129 16:42:53.672652 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-bqffj"]
Jan 29 16:42:53 crc kubenswrapper[4886]: I0129 16:42:53.673793 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqffj"
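
The "Observed pod startup duration" entries above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A minimal Go sketch that recomputes both figures for observability-operator-59bdc8b94-w5qml from the timestamps logged earlier; the layout string is Go's reference time format, and every timestamp is copied verbatim from the log:

    package main

    import (
        "fmt"
        "time"
    )

    // Recompute the kubelet's startup-latency figures for
    // observability-operator-59bdc8b94-w5qml from its log entry above.
    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // default time.Time print format
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2026-01-29 16:42:29 +0000 UTC")             // podCreationTimestamp
        firstPull := parse("2026-01-29 16:42:30.007827503 +0000 UTC") // firstStartedPulling
        lastPull := parse("2026-01-29 16:42:44.179316995 +0000 UTC")  // lastFinishedPulling
        running := parse("2026-01-29 16:42:44.949504031 +0000 UTC")   // watchObservedRunningTime

        e2e := running.Sub(created)          // 15.949504031s = podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // 1.778014539s  = podStartSLOduration
        fmt.Println(e2e, slo)
    }

The same identity holds for the four entries just above, e.g. 17.010734304s minus the pull window (16:42:44.076061142 minus 16:42:29.739201867) gives 2.673875029s for the hrhb5 admission webhook.
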
Jan 29 16:42:53 crc kubenswrapper[4886]: I0129 16:42:53.676081 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Jan 29 16:42:53 crc kubenswrapper[4886]: I0129 16:42:53.676242 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Jan 29 16:42:53 crc kubenswrapper[4886]: I0129 16:42:53.676313 4886 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-jlgkl"
Jan 29 16:42:53 crc kubenswrapper[4886]: I0129 16:42:53.697747 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x5t7\" (UniqueName: \"kubernetes.io/projected/f883321e-6f99-4c0d-89ea-377fec9d166c-kube-api-access-9x5t7\") pod \"cert-manager-cainjector-cf98fcc89-bqffj\" (UID: \"f883321e-6f99-4c0d-89ea-377fec9d166c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqffj"
Jan 29 16:42:53 crc kubenswrapper[4886]: I0129 16:42:53.706383 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-bqffj"]
Jan 29 16:42:53 crc kubenswrapper[4886]: I0129 16:42:53.733527 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-n8tt2"]
Jan 29 16:42:53 crc kubenswrapper[4886]: I0129 16:42:53.734377 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-n8tt2"
Jan 29 16:42:53 crc kubenswrapper[4886]: I0129 16:42:53.736272 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-sd87l"]
Jan 29 16:42:53 crc kubenswrapper[4886]: I0129 16:42:53.737352 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-sd87l" Jan 29 16:42:53 crc kubenswrapper[4886]: I0129 16:42:53.737609 4886 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-fl6zk" Jan 29 16:42:53 crc kubenswrapper[4886]: I0129 16:42:53.743223 4886 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-sqmqv" Jan 29 16:42:53 crc kubenswrapper[4886]: I0129 16:42:53.743385 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-n8tt2"] Jan 29 16:42:53 crc kubenswrapper[4886]: I0129 16:42:53.746377 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-sd87l"] Jan 29 16:42:53 crc kubenswrapper[4886]: I0129 16:42:53.799029 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpdsz\" (UniqueName: \"kubernetes.io/projected/a80a9fce-17df-45c6-b123-f3060469c1c9-kube-api-access-mpdsz\") pod \"cert-manager-webhook-687f57d79b-sd87l\" (UID: \"a80a9fce-17df-45c6-b123-f3060469c1c9\") " pod="cert-manager/cert-manager-webhook-687f57d79b-sd87l" Jan 29 16:42:53 crc kubenswrapper[4886]: I0129 16:42:53.799113 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x5t7\" (UniqueName: \"kubernetes.io/projected/f883321e-6f99-4c0d-89ea-377fec9d166c-kube-api-access-9x5t7\") pod \"cert-manager-cainjector-cf98fcc89-bqffj\" (UID: \"f883321e-6f99-4c0d-89ea-377fec9d166c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqffj" Jan 29 16:42:53 crc kubenswrapper[4886]: I0129 16:42:53.799142 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcbkh\" (UniqueName: \"kubernetes.io/projected/0eee9f11-c5ff-490b-a5ea-7a62ef8f0a0a-kube-api-access-lcbkh\") pod \"cert-manager-858654f9db-n8tt2\" (UID: \"0eee9f11-c5ff-490b-a5ea-7a62ef8f0a0a\") " pod="cert-manager/cert-manager-858654f9db-n8tt2" Jan 29 16:42:53 crc kubenswrapper[4886]: I0129 16:42:53.832370 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x5t7\" (UniqueName: \"kubernetes.io/projected/f883321e-6f99-4c0d-89ea-377fec9d166c-kube-api-access-9x5t7\") pod \"cert-manager-cainjector-cf98fcc89-bqffj\" (UID: \"f883321e-6f99-4c0d-89ea-377fec9d166c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqffj" Jan 29 16:42:53 crc kubenswrapper[4886]: I0129 16:42:53.900880 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpdsz\" (UniqueName: \"kubernetes.io/projected/a80a9fce-17df-45c6-b123-f3060469c1c9-kube-api-access-mpdsz\") pod \"cert-manager-webhook-687f57d79b-sd87l\" (UID: \"a80a9fce-17df-45c6-b123-f3060469c1c9\") " pod="cert-manager/cert-manager-webhook-687f57d79b-sd87l" Jan 29 16:42:53 crc kubenswrapper[4886]: I0129 16:42:53.900995 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcbkh\" (UniqueName: \"kubernetes.io/projected/0eee9f11-c5ff-490b-a5ea-7a62ef8f0a0a-kube-api-access-lcbkh\") pod \"cert-manager-858654f9db-n8tt2\" (UID: \"0eee9f11-c5ff-490b-a5ea-7a62ef8f0a0a\") " pod="cert-manager/cert-manager-858654f9db-n8tt2" Jan 29 16:42:53 crc kubenswrapper[4886]: I0129 16:42:53.927062 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcbkh\" (UniqueName: 
\"kubernetes.io/projected/0eee9f11-c5ff-490b-a5ea-7a62ef8f0a0a-kube-api-access-lcbkh\") pod \"cert-manager-858654f9db-n8tt2\" (UID: \"0eee9f11-c5ff-490b-a5ea-7a62ef8f0a0a\") " pod="cert-manager/cert-manager-858654f9db-n8tt2" Jan 29 16:42:53 crc kubenswrapper[4886]: I0129 16:42:53.929695 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpdsz\" (UniqueName: \"kubernetes.io/projected/a80a9fce-17df-45c6-b123-f3060469c1c9-kube-api-access-mpdsz\") pod \"cert-manager-webhook-687f57d79b-sd87l\" (UID: \"a80a9fce-17df-45c6-b123-f3060469c1c9\") " pod="cert-manager/cert-manager-webhook-687f57d79b-sd87l" Jan 29 16:42:54 crc kubenswrapper[4886]: I0129 16:42:54.009301 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqffj" Jan 29 16:42:54 crc kubenswrapper[4886]: I0129 16:42:54.098265 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-n8tt2" Jan 29 16:42:54 crc kubenswrapper[4886]: I0129 16:42:54.104154 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-sd87l" Jan 29 16:42:54 crc kubenswrapper[4886]: I0129 16:42:54.599179 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-bqffj"] Jan 29 16:42:54 crc kubenswrapper[4886]: I0129 16:42:54.633690 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-n8tt2"] Jan 29 16:42:54 crc kubenswrapper[4886]: I0129 16:42:54.752372 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-sd87l"] Jan 29 16:42:54 crc kubenswrapper[4886]: I0129 16:42:54.986290 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-sd87l" event={"ID":"a80a9fce-17df-45c6-b123-f3060469c1c9","Type":"ContainerStarted","Data":"d76cc09d39fd1489a0a6731b4db02244e2b953b627c2a1da89d75a187a77d4fa"} Jan 29 16:42:54 crc kubenswrapper[4886]: I0129 16:42:54.987221 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-n8tt2" event={"ID":"0eee9f11-c5ff-490b-a5ea-7a62ef8f0a0a","Type":"ContainerStarted","Data":"6e6db538cd0773e16d22299a597c45cac1c79850c2689aceea23c8c1d44a2acb"} Jan 29 16:42:54 crc kubenswrapper[4886]: I0129 16:42:54.988119 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqffj" event={"ID":"f883321e-6f99-4c0d-89ea-377fec9d166c","Type":"ContainerStarted","Data":"aac746d82eefc1fab729f2d22b3db755db7701292ab01dff94bf3a35158d7548"} Jan 29 16:43:00 crc kubenswrapper[4886]: I0129 16:43:00.027297 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqffj" event={"ID":"f883321e-6f99-4c0d-89ea-377fec9d166c","Type":"ContainerStarted","Data":"a4f3b16bd260748325fa52011e2e544b805ef52770eb12e956d54a4637e53c9c"} Jan 29 16:43:00 crc kubenswrapper[4886]: I0129 16:43:00.028651 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-sd87l" event={"ID":"a80a9fce-17df-45c6-b123-f3060469c1c9","Type":"ContainerStarted","Data":"4ae4f5de49ab8e404f36f0965082d3290ed77dedd5ab75141d3d59441c428d17"} Jan 29 16:43:00 crc kubenswrapper[4886]: I0129 16:43:00.028852 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="cert-manager/cert-manager-webhook-687f57d79b-sd87l" Jan 29 16:43:00 crc kubenswrapper[4886]: I0129 16:43:00.029792 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-n8tt2" event={"ID":"0eee9f11-c5ff-490b-a5ea-7a62ef8f0a0a","Type":"ContainerStarted","Data":"aac75e2cb7cfa8356fdcb6a853568d59ab7efaf19a045c2f0d1b28d5aeac4a61"} Jan 29 16:43:00 crc kubenswrapper[4886]: I0129 16:43:00.039938 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bqffj" podStartSLOduration=2.819381327 podStartE2EDuration="7.039916779s" podCreationTimestamp="2026-01-29 16:42:53 +0000 UTC" firstStartedPulling="2026-01-29 16:42:54.610755139 +0000 UTC m=+1257.519474401" lastFinishedPulling="2026-01-29 16:42:58.831290581 +0000 UTC m=+1261.740009853" observedRunningTime="2026-01-29 16:43:00.039166627 +0000 UTC m=+1262.947885899" watchObservedRunningTime="2026-01-29 16:43:00.039916779 +0000 UTC m=+1262.948636081" Jan 29 16:43:00 crc kubenswrapper[4886]: I0129 16:43:00.055791 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-n8tt2" podStartSLOduration=2.8109913300000002 podStartE2EDuration="7.055769608s" podCreationTimestamp="2026-01-29 16:42:53 +0000 UTC" firstStartedPulling="2026-01-29 16:42:54.652903182 +0000 UTC m=+1257.561622454" lastFinishedPulling="2026-01-29 16:42:58.89768144 +0000 UTC m=+1261.806400732" observedRunningTime="2026-01-29 16:43:00.051466756 +0000 UTC m=+1262.960186088" watchObservedRunningTime="2026-01-29 16:43:00.055769608 +0000 UTC m=+1262.964488880" Jan 29 16:43:00 crc kubenswrapper[4886]: I0129 16:43:00.087949 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-sd87l" podStartSLOduration=2.93782224 podStartE2EDuration="7.087928198s" podCreationTimestamp="2026-01-29 16:42:53 +0000 UTC" firstStartedPulling="2026-01-29 16:42:54.75595134 +0000 UTC m=+1257.664670612" lastFinishedPulling="2026-01-29 16:42:58.906057288 +0000 UTC m=+1261.814776570" observedRunningTime="2026-01-29 16:43:00.087215388 +0000 UTC m=+1262.995934670" watchObservedRunningTime="2026-01-29 16:43:00.087928198 +0000 UTC m=+1262.996647470" Jan 29 16:43:04 crc kubenswrapper[4886]: I0129 16:43:04.108355 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-sd87l" Jan 29 16:43:31 crc kubenswrapper[4886]: I0129 16:43:31.499224 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t"] Jan 29 16:43:31 crc kubenswrapper[4886]: I0129 16:43:31.504480 4886 util.go:30] "No sandbox for pod can be found. 
Jan 29 16:43:31 crc kubenswrapper[4886]: I0129 16:43:31.506073 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t"]
Jan 29 16:43:31 crc kubenswrapper[4886]: I0129 16:43:31.516143 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 29 16:43:31 crc kubenswrapper[4886]: I0129 16:43:31.600110 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e6c5874b-97c3-4f3e-8e88-68c3653a6c4a-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t\" (UID: \"e6c5874b-97c3-4f3e-8e88-68c3653a6c4a\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t"
Jan 29 16:43:31 crc kubenswrapper[4886]: I0129 16:43:31.600206 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e6c5874b-97c3-4f3e-8e88-68c3653a6c4a-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t\" (UID: \"e6c5874b-97c3-4f3e-8e88-68c3653a6c4a\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t"
Jan 29 16:43:31 crc kubenswrapper[4886]: I0129 16:43:31.600237 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntc9w\" (UniqueName: \"kubernetes.io/projected/e6c5874b-97c3-4f3e-8e88-68c3653a6c4a-kube-api-access-ntc9w\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t\" (UID: \"e6c5874b-97c3-4f3e-8e88-68c3653a6c4a\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t"
Jan 29 16:43:31 crc kubenswrapper[4886]: I0129 16:43:31.701938 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e6c5874b-97c3-4f3e-8e88-68c3653a6c4a-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t\" (UID: \"e6c5874b-97c3-4f3e-8e88-68c3653a6c4a\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t"
Jan 29 16:43:31 crc kubenswrapper[4886]: I0129 16:43:31.702115 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e6c5874b-97c3-4f3e-8e88-68c3653a6c4a-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t\" (UID: \"e6c5874b-97c3-4f3e-8e88-68c3653a6c4a\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t"
Jan 29 16:43:31 crc kubenswrapper[4886]: I0129 16:43:31.702151 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntc9w\" (UniqueName: \"kubernetes.io/projected/e6c5874b-97c3-4f3e-8e88-68c3653a6c4a-kube-api-access-ntc9w\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t\" (UID: \"e6c5874b-97c3-4f3e-8e88-68c3653a6c4a\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t"
Jan 29 16:43:31 crc kubenswrapper[4886]: I0129 16:43:31.702425 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/e6c5874b-97c3-4f3e-8e88-68c3653a6c4a-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t\" (UID: \"e6c5874b-97c3-4f3e-8e88-68c3653a6c4a\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t" Jan 29 16:43:31 crc kubenswrapper[4886]: I0129 16:43:31.702968 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e6c5874b-97c3-4f3e-8e88-68c3653a6c4a-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t\" (UID: \"e6c5874b-97c3-4f3e-8e88-68c3653a6c4a\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t" Jan 29 16:43:31 crc kubenswrapper[4886]: I0129 16:43:31.724934 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntc9w\" (UniqueName: \"kubernetes.io/projected/e6c5874b-97c3-4f3e-8e88-68c3653a6c4a-kube-api-access-ntc9w\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t\" (UID: \"e6c5874b-97c3-4f3e-8e88-68c3653a6c4a\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t" Jan 29 16:43:31 crc kubenswrapper[4886]: I0129 16:43:31.825406 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t" Jan 29 16:43:31 crc kubenswrapper[4886]: I0129 16:43:31.864650 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n"] Jan 29 16:43:31 crc kubenswrapper[4886]: I0129 16:43:31.866183 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n" Jan 29 16:43:31 crc kubenswrapper[4886]: I0129 16:43:31.896754 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n"] Jan 29 16:43:31 crc kubenswrapper[4886]: I0129 16:43:31.904688 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b00b2947-6947-4d0a-b2d9-42adefd8ebb3-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n\" (UID: \"b00b2947-6947-4d0a-b2d9-42adefd8ebb3\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n" Jan 29 16:43:31 crc kubenswrapper[4886]: I0129 16:43:31.904821 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqsfs\" (UniqueName: \"kubernetes.io/projected/b00b2947-6947-4d0a-b2d9-42adefd8ebb3-kube-api-access-rqsfs\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n\" (UID: \"b00b2947-6947-4d0a-b2d9-42adefd8ebb3\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n" Jan 29 16:43:31 crc kubenswrapper[4886]: I0129 16:43:31.904879 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b00b2947-6947-4d0a-b2d9-42adefd8ebb3-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n\" (UID: \"b00b2947-6947-4d0a-b2d9-42adefd8ebb3\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n" Jan 29 16:43:32 
crc kubenswrapper[4886]: I0129 16:43:32.006430 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqsfs\" (UniqueName: \"kubernetes.io/projected/b00b2947-6947-4d0a-b2d9-42adefd8ebb3-kube-api-access-rqsfs\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n\" (UID: \"b00b2947-6947-4d0a-b2d9-42adefd8ebb3\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n" Jan 29 16:43:32 crc kubenswrapper[4886]: I0129 16:43:32.006510 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b00b2947-6947-4d0a-b2d9-42adefd8ebb3-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n\" (UID: \"b00b2947-6947-4d0a-b2d9-42adefd8ebb3\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n" Jan 29 16:43:32 crc kubenswrapper[4886]: I0129 16:43:32.006564 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b00b2947-6947-4d0a-b2d9-42adefd8ebb3-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n\" (UID: \"b00b2947-6947-4d0a-b2d9-42adefd8ebb3\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n" Jan 29 16:43:32 crc kubenswrapper[4886]: I0129 16:43:32.007073 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b00b2947-6947-4d0a-b2d9-42adefd8ebb3-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n\" (UID: \"b00b2947-6947-4d0a-b2d9-42adefd8ebb3\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n" Jan 29 16:43:32 crc kubenswrapper[4886]: I0129 16:43:32.007318 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b00b2947-6947-4d0a-b2d9-42adefd8ebb3-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n\" (UID: \"b00b2947-6947-4d0a-b2d9-42adefd8ebb3\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n" Jan 29 16:43:32 crc kubenswrapper[4886]: I0129 16:43:32.033428 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqsfs\" (UniqueName: \"kubernetes.io/projected/b00b2947-6947-4d0a-b2d9-42adefd8ebb3-kube-api-access-rqsfs\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n\" (UID: \"b00b2947-6947-4d0a-b2d9-42adefd8ebb3\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n" Jan 29 16:43:32 crc kubenswrapper[4886]: I0129 16:43:32.064001 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t"] Jan 29 16:43:32 crc kubenswrapper[4886]: I0129 16:43:32.196920 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n" Jan 29 16:43:32 crc kubenswrapper[4886]: I0129 16:43:32.272451 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t" event={"ID":"e6c5874b-97c3-4f3e-8e88-68c3653a6c4a","Type":"ContainerStarted","Data":"3b3d7653af10af1be662575ec81d5964f016b37d552180b0fffc7f334ee3e715"} Jan 29 16:43:32 crc kubenswrapper[4886]: I0129 16:43:32.423072 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n"] Jan 29 16:43:32 crc kubenswrapper[4886]: W0129 16:43:32.425835 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb00b2947_6947_4d0a_b2d9_42adefd8ebb3.slice/crio-1d29ef1c12997096e36892fdf75d3f7775d972c0d8c2b7af17235ce3ab3f5ad1 WatchSource:0}: Error finding container 1d29ef1c12997096e36892fdf75d3f7775d972c0d8c2b7af17235ce3ab3f5ad1: Status 404 returned error can't find the container with id 1d29ef1c12997096e36892fdf75d3f7775d972c0d8c2b7af17235ce3ab3f5ad1 Jan 29 16:43:33 crc kubenswrapper[4886]: I0129 16:43:33.281280 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n" event={"ID":"b00b2947-6947-4d0a-b2d9-42adefd8ebb3","Type":"ContainerStarted","Data":"1d29ef1c12997096e36892fdf75d3f7775d972c0d8c2b7af17235ce3ab3f5ad1"} Jan 29 16:43:34 crc kubenswrapper[4886]: I0129 16:43:34.291076 4886 generic.go:334] "Generic (PLEG): container finished" podID="b00b2947-6947-4d0a-b2d9-42adefd8ebb3" containerID="e4cccb4d486fe60f0edfb4f7f715ab8d92c12f9f9f4a1cfe4e00c4adc5c34b51" exitCode=0 Jan 29 16:43:34 crc kubenswrapper[4886]: I0129 16:43:34.291208 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n" event={"ID":"b00b2947-6947-4d0a-b2d9-42adefd8ebb3","Type":"ContainerDied","Data":"e4cccb4d486fe60f0edfb4f7f715ab8d92c12f9f9f4a1cfe4e00c4adc5c34b51"} Jan 29 16:43:34 crc kubenswrapper[4886]: I0129 16:43:34.296184 4886 generic.go:334] "Generic (PLEG): container finished" podID="e6c5874b-97c3-4f3e-8e88-68c3653a6c4a" containerID="a37b6266b19c1ce3a441dff00e8cafa9669109c4ad6f2385f4502687f4af460a" exitCode=0 Jan 29 16:43:34 crc kubenswrapper[4886]: I0129 16:43:34.296253 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t" event={"ID":"e6c5874b-97c3-4f3e-8e88-68c3653a6c4a","Type":"ContainerDied","Data":"a37b6266b19c1ce3a441dff00e8cafa9669109c4ad6f2385f4502687f4af460a"} Jan 29 16:43:38 crc kubenswrapper[4886]: I0129 16:43:38.338640 4886 generic.go:334] "Generic (PLEG): container finished" podID="b00b2947-6947-4d0a-b2d9-42adefd8ebb3" containerID="82c9ec7fc7823b99a453ab6558f3f2d190f9fc013e02e7613db77aca6c9d421f" exitCode=0 Jan 29 16:43:38 crc kubenswrapper[4886]: I0129 16:43:38.338751 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n" event={"ID":"b00b2947-6947-4d0a-b2d9-42adefd8ebb3","Type":"ContainerDied","Data":"82c9ec7fc7823b99a453ab6558f3f2d190f9fc013e02e7613db77aca6c9d421f"} Jan 29 16:43:38 crc kubenswrapper[4886]: I0129 16:43:38.343236 4886 generic.go:334] "Generic (PLEG): container 
finished" podID="e6c5874b-97c3-4f3e-8e88-68c3653a6c4a" containerID="0e60e37f19cf29954ac9598d39f3e907b0a8fd7df0f8e5321feafa568cea256e" exitCode=0 Jan 29 16:43:38 crc kubenswrapper[4886]: I0129 16:43:38.343297 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t" event={"ID":"e6c5874b-97c3-4f3e-8e88-68c3653a6c4a","Type":"ContainerDied","Data":"0e60e37f19cf29954ac9598d39f3e907b0a8fd7df0f8e5321feafa568cea256e"} Jan 29 16:43:39 crc kubenswrapper[4886]: I0129 16:43:39.351752 4886 generic.go:334] "Generic (PLEG): container finished" podID="e6c5874b-97c3-4f3e-8e88-68c3653a6c4a" containerID="c3183e31247098ddd97f7b27ad0dbf70d02daf691b6fbd6a4595181aba6a0ae9" exitCode=0 Jan 29 16:43:39 crc kubenswrapper[4886]: I0129 16:43:39.351844 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t" event={"ID":"e6c5874b-97c3-4f3e-8e88-68c3653a6c4a","Type":"ContainerDied","Data":"c3183e31247098ddd97f7b27ad0dbf70d02daf691b6fbd6a4595181aba6a0ae9"} Jan 29 16:43:39 crc kubenswrapper[4886]: I0129 16:43:39.354642 4886 generic.go:334] "Generic (PLEG): container finished" podID="b00b2947-6947-4d0a-b2d9-42adefd8ebb3" containerID="8d122cad021ce2744d255a9dc7ff90dfde7fd82fdce7705c91c1c86d943ebbab" exitCode=0 Jan 29 16:43:39 crc kubenswrapper[4886]: I0129 16:43:39.354682 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n" event={"ID":"b00b2947-6947-4d0a-b2d9-42adefd8ebb3","Type":"ContainerDied","Data":"8d122cad021ce2744d255a9dc7ff90dfde7fd82fdce7705c91c1c86d943ebbab"} Jan 29 16:43:40 crc kubenswrapper[4886]: I0129 16:43:40.643219 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t" Jan 29 16:43:40 crc kubenswrapper[4886]: I0129 16:43:40.648142 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n" Jan 29 16:43:40 crc kubenswrapper[4886]: I0129 16:43:40.837656 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntc9w\" (UniqueName: \"kubernetes.io/projected/e6c5874b-97c3-4f3e-8e88-68c3653a6c4a-kube-api-access-ntc9w\") pod \"e6c5874b-97c3-4f3e-8e88-68c3653a6c4a\" (UID: \"e6c5874b-97c3-4f3e-8e88-68c3653a6c4a\") " Jan 29 16:43:40 crc kubenswrapper[4886]: I0129 16:43:40.837757 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e6c5874b-97c3-4f3e-8e88-68c3653a6c4a-util\") pod \"e6c5874b-97c3-4f3e-8e88-68c3653a6c4a\" (UID: \"e6c5874b-97c3-4f3e-8e88-68c3653a6c4a\") " Jan 29 16:43:40 crc kubenswrapper[4886]: I0129 16:43:40.837779 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b00b2947-6947-4d0a-b2d9-42adefd8ebb3-bundle\") pod \"b00b2947-6947-4d0a-b2d9-42adefd8ebb3\" (UID: \"b00b2947-6947-4d0a-b2d9-42adefd8ebb3\") " Jan 29 16:43:40 crc kubenswrapper[4886]: I0129 16:43:40.837808 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e6c5874b-97c3-4f3e-8e88-68c3653a6c4a-bundle\") pod \"e6c5874b-97c3-4f3e-8e88-68c3653a6c4a\" (UID: \"e6c5874b-97c3-4f3e-8e88-68c3653a6c4a\") " Jan 29 16:43:40 crc kubenswrapper[4886]: I0129 16:43:40.837826 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b00b2947-6947-4d0a-b2d9-42adefd8ebb3-util\") pod \"b00b2947-6947-4d0a-b2d9-42adefd8ebb3\" (UID: \"b00b2947-6947-4d0a-b2d9-42adefd8ebb3\") " Jan 29 16:43:40 crc kubenswrapper[4886]: I0129 16:43:40.837926 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqsfs\" (UniqueName: \"kubernetes.io/projected/b00b2947-6947-4d0a-b2d9-42adefd8ebb3-kube-api-access-rqsfs\") pod \"b00b2947-6947-4d0a-b2d9-42adefd8ebb3\" (UID: \"b00b2947-6947-4d0a-b2d9-42adefd8ebb3\") " Jan 29 16:43:40 crc kubenswrapper[4886]: I0129 16:43:40.838581 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6c5874b-97c3-4f3e-8e88-68c3653a6c4a-bundle" (OuterVolumeSpecName: "bundle") pod "e6c5874b-97c3-4f3e-8e88-68c3653a6c4a" (UID: "e6c5874b-97c3-4f3e-8e88-68c3653a6c4a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:43:40 crc kubenswrapper[4886]: I0129 16:43:40.838595 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b00b2947-6947-4d0a-b2d9-42adefd8ebb3-bundle" (OuterVolumeSpecName: "bundle") pod "b00b2947-6947-4d0a-b2d9-42adefd8ebb3" (UID: "b00b2947-6947-4d0a-b2d9-42adefd8ebb3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:43:40 crc kubenswrapper[4886]: I0129 16:43:40.844631 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6c5874b-97c3-4f3e-8e88-68c3653a6c4a-kube-api-access-ntc9w" (OuterVolumeSpecName: "kube-api-access-ntc9w") pod "e6c5874b-97c3-4f3e-8e88-68c3653a6c4a" (UID: "e6c5874b-97c3-4f3e-8e88-68c3653a6c4a"). InnerVolumeSpecName "kube-api-access-ntc9w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:43:40 crc kubenswrapper[4886]: I0129 16:43:40.845560 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b00b2947-6947-4d0a-b2d9-42adefd8ebb3-kube-api-access-rqsfs" (OuterVolumeSpecName: "kube-api-access-rqsfs") pod "b00b2947-6947-4d0a-b2d9-42adefd8ebb3" (UID: "b00b2947-6947-4d0a-b2d9-42adefd8ebb3"). InnerVolumeSpecName "kube-api-access-rqsfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:43:40 crc kubenswrapper[4886]: I0129 16:43:40.847910 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6c5874b-97c3-4f3e-8e88-68c3653a6c4a-util" (OuterVolumeSpecName: "util") pod "e6c5874b-97c3-4f3e-8e88-68c3653a6c4a" (UID: "e6c5874b-97c3-4f3e-8e88-68c3653a6c4a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:43:40 crc kubenswrapper[4886]: I0129 16:43:40.862062 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b00b2947-6947-4d0a-b2d9-42adefd8ebb3-util" (OuterVolumeSpecName: "util") pod "b00b2947-6947-4d0a-b2d9-42adefd8ebb3" (UID: "b00b2947-6947-4d0a-b2d9-42adefd8ebb3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:43:40 crc kubenswrapper[4886]: I0129 16:43:40.939861 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqsfs\" (UniqueName: \"kubernetes.io/projected/b00b2947-6947-4d0a-b2d9-42adefd8ebb3-kube-api-access-rqsfs\") on node \"crc\" DevicePath \"\"" Jan 29 16:43:40 crc kubenswrapper[4886]: I0129 16:43:40.939904 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntc9w\" (UniqueName: \"kubernetes.io/projected/e6c5874b-97c3-4f3e-8e88-68c3653a6c4a-kube-api-access-ntc9w\") on node \"crc\" DevicePath \"\"" Jan 29 16:43:40 crc kubenswrapper[4886]: I0129 16:43:40.939916 4886 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e6c5874b-97c3-4f3e-8e88-68c3653a6c4a-util\") on node \"crc\" DevicePath \"\"" Jan 29 16:43:40 crc kubenswrapper[4886]: I0129 16:43:40.939928 4886 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b00b2947-6947-4d0a-b2d9-42adefd8ebb3-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:43:40 crc kubenswrapper[4886]: I0129 16:43:40.939941 4886 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e6c5874b-97c3-4f3e-8e88-68c3653a6c4a-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:43:40 crc kubenswrapper[4886]: I0129 16:43:40.939951 4886 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b00b2947-6947-4d0a-b2d9-42adefd8ebb3-util\") on node \"crc\" DevicePath \"\"" Jan 29 16:43:41 crc kubenswrapper[4886]: I0129 16:43:41.369029 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n" event={"ID":"b00b2947-6947-4d0a-b2d9-42adefd8ebb3","Type":"ContainerDied","Data":"1d29ef1c12997096e36892fdf75d3f7775d972c0d8c2b7af17235ce3ab3f5ad1"} Jan 29 16:43:41 crc kubenswrapper[4886]: I0129 16:43:41.369066 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d29ef1c12997096e36892fdf75d3f7775d972c0d8c2b7af17235ce3ab3f5ad1" Jan 29 16:43:41 crc kubenswrapper[4886]: I0129 16:43:41.369277 4886 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n" Jan 29 16:43:41 crc kubenswrapper[4886]: I0129 16:43:41.371407 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t" event={"ID":"e6c5874b-97c3-4f3e-8e88-68c3653a6c4a","Type":"ContainerDied","Data":"3b3d7653af10af1be662575ec81d5964f016b37d552180b0fffc7f334ee3e715"} Jan 29 16:43:41 crc kubenswrapper[4886]: I0129 16:43:41.371431 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b3d7653af10af1be662575ec81d5964f016b37d552180b0fffc7f334ee3e715" Jan 29 16:43:41 crc kubenswrapper[4886]: I0129 16:43:41.371514 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.282812 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5b44bcdc44-bgqfw"] Jan 29 16:43:48 crc kubenswrapper[4886]: E0129 16:43:48.283683 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b00b2947-6947-4d0a-b2d9-42adefd8ebb3" containerName="extract" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.283699 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="b00b2947-6947-4d0a-b2d9-42adefd8ebb3" containerName="extract" Jan 29 16:43:48 crc kubenswrapper[4886]: E0129 16:43:48.283721 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6c5874b-97c3-4f3e-8e88-68c3653a6c4a" containerName="extract" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.283729 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6c5874b-97c3-4f3e-8e88-68c3653a6c4a" containerName="extract" Jan 29 16:43:48 crc kubenswrapper[4886]: E0129 16:43:48.283743 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b00b2947-6947-4d0a-b2d9-42adefd8ebb3" containerName="util" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.283751 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="b00b2947-6947-4d0a-b2d9-42adefd8ebb3" containerName="util" Jan 29 16:43:48 crc kubenswrapper[4886]: E0129 16:43:48.283765 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b00b2947-6947-4d0a-b2d9-42adefd8ebb3" containerName="pull" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.283773 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="b00b2947-6947-4d0a-b2d9-42adefd8ebb3" containerName="pull" Jan 29 16:43:48 crc kubenswrapper[4886]: E0129 16:43:48.283790 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6c5874b-97c3-4f3e-8e88-68c3653a6c4a" containerName="pull" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.283798 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6c5874b-97c3-4f3e-8e88-68c3653a6c4a" containerName="pull" Jan 29 16:43:48 crc kubenswrapper[4886]: E0129 16:43:48.283807 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6c5874b-97c3-4f3e-8e88-68c3653a6c4a" containerName="util" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.283814 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6c5874b-97c3-4f3e-8e88-68c3653a6c4a" containerName="util" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.283959 4886 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="b00b2947-6947-4d0a-b2d9-42adefd8ebb3" containerName="extract" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.283977 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6c5874b-97c3-4f3e-8e88-68c3653a6c4a" containerName="extract" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.284843 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5b44bcdc44-bgqfw" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.287769 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.288058 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.288590 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.288808 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.289168 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.289535 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-tvwsb" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.307748 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5b44bcdc44-bgqfw"] Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.449548 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/994fe9e1-7adf-4aab-bc9e-d51fd52286a9-manager-config\") pod \"loki-operator-controller-manager-5b44bcdc44-bgqfw\" (UID: \"994fe9e1-7adf-4aab-bc9e-d51fd52286a9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5b44bcdc44-bgqfw" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.449883 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz5vt\" (UniqueName: \"kubernetes.io/projected/994fe9e1-7adf-4aab-bc9e-d51fd52286a9-kube-api-access-vz5vt\") pod \"loki-operator-controller-manager-5b44bcdc44-bgqfw\" (UID: \"994fe9e1-7adf-4aab-bc9e-d51fd52286a9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5b44bcdc44-bgqfw" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.449923 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/994fe9e1-7adf-4aab-bc9e-d51fd52286a9-webhook-cert\") pod \"loki-operator-controller-manager-5b44bcdc44-bgqfw\" (UID: \"994fe9e1-7adf-4aab-bc9e-d51fd52286a9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5b44bcdc44-bgqfw" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.449955 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/994fe9e1-7adf-4aab-bc9e-d51fd52286a9-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5b44bcdc44-bgqfw\" (UID: \"994fe9e1-7adf-4aab-bc9e-d51fd52286a9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5b44bcdc44-bgqfw" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.449993 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/994fe9e1-7adf-4aab-bc9e-d51fd52286a9-apiservice-cert\") pod \"loki-operator-controller-manager-5b44bcdc44-bgqfw\" (UID: \"994fe9e1-7adf-4aab-bc9e-d51fd52286a9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5b44bcdc44-bgqfw" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.551111 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/994fe9e1-7adf-4aab-bc9e-d51fd52286a9-webhook-cert\") pod \"loki-operator-controller-manager-5b44bcdc44-bgqfw\" (UID: \"994fe9e1-7adf-4aab-bc9e-d51fd52286a9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5b44bcdc44-bgqfw" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.551186 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/994fe9e1-7adf-4aab-bc9e-d51fd52286a9-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5b44bcdc44-bgqfw\" (UID: \"994fe9e1-7adf-4aab-bc9e-d51fd52286a9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5b44bcdc44-bgqfw" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.551245 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/994fe9e1-7adf-4aab-bc9e-d51fd52286a9-apiservice-cert\") pod \"loki-operator-controller-manager-5b44bcdc44-bgqfw\" (UID: \"994fe9e1-7adf-4aab-bc9e-d51fd52286a9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5b44bcdc44-bgqfw" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.551289 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/994fe9e1-7adf-4aab-bc9e-d51fd52286a9-manager-config\") pod \"loki-operator-controller-manager-5b44bcdc44-bgqfw\" (UID: \"994fe9e1-7adf-4aab-bc9e-d51fd52286a9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5b44bcdc44-bgqfw" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.551342 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz5vt\" (UniqueName: \"kubernetes.io/projected/994fe9e1-7adf-4aab-bc9e-d51fd52286a9-kube-api-access-vz5vt\") pod \"loki-operator-controller-manager-5b44bcdc44-bgqfw\" (UID: \"994fe9e1-7adf-4aab-bc9e-d51fd52286a9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5b44bcdc44-bgqfw" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.552578 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/994fe9e1-7adf-4aab-bc9e-d51fd52286a9-manager-config\") pod \"loki-operator-controller-manager-5b44bcdc44-bgqfw\" (UID: \"994fe9e1-7adf-4aab-bc9e-d51fd52286a9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5b44bcdc44-bgqfw" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.557195 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/994fe9e1-7adf-4aab-bc9e-d51fd52286a9-apiservice-cert\") pod \"loki-operator-controller-manager-5b44bcdc44-bgqfw\" (UID: \"994fe9e1-7adf-4aab-bc9e-d51fd52286a9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5b44bcdc44-bgqfw" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.557727 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/994fe9e1-7adf-4aab-bc9e-d51fd52286a9-webhook-cert\") pod \"loki-operator-controller-manager-5b44bcdc44-bgqfw\" (UID: \"994fe9e1-7adf-4aab-bc9e-d51fd52286a9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5b44bcdc44-bgqfw" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.562475 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/994fe9e1-7adf-4aab-bc9e-d51fd52286a9-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5b44bcdc44-bgqfw\" (UID: \"994fe9e1-7adf-4aab-bc9e-d51fd52286a9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5b44bcdc44-bgqfw" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.567049 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz5vt\" (UniqueName: \"kubernetes.io/projected/994fe9e1-7adf-4aab-bc9e-d51fd52286a9-kube-api-access-vz5vt\") pod \"loki-operator-controller-manager-5b44bcdc44-bgqfw\" (UID: \"994fe9e1-7adf-4aab-bc9e-d51fd52286a9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5b44bcdc44-bgqfw" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.607893 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5b44bcdc44-bgqfw" Jan 29 16:43:48 crc kubenswrapper[4886]: I0129 16:43:48.854431 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5b44bcdc44-bgqfw"] Jan 29 16:43:49 crc kubenswrapper[4886]: I0129 16:43:49.421889 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5b44bcdc44-bgqfw" event={"ID":"994fe9e1-7adf-4aab-bc9e-d51fd52286a9","Type":"ContainerStarted","Data":"ba9a5d85b3ffbc6869ac3918f6bc131600276658ec5cc190c42bbcfd7659bf26"} Jan 29 16:43:52 crc kubenswrapper[4886]: I0129 16:43:52.198200 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-hgdlt"] Jan 29 16:43:52 crc kubenswrapper[4886]: I0129 16:43:52.200776 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-hgdlt" Jan 29 16:43:52 crc kubenswrapper[4886]: I0129 16:43:52.204661 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Jan 29 16:43:52 crc kubenswrapper[4886]: I0129 16:43:52.204994 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-vvmn2" Jan 29 16:43:52 crc kubenswrapper[4886]: I0129 16:43:52.205194 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Jan 29 16:43:52 crc kubenswrapper[4886]: I0129 16:43:52.211604 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-hgdlt"] Jan 29 16:43:52 crc kubenswrapper[4886]: I0129 16:43:52.312262 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn9zg\" (UniqueName: \"kubernetes.io/projected/7f5851a1-d10c-445d-bffc-12a6acc01ead-kube-api-access-hn9zg\") pod \"cluster-logging-operator-79cf69ddc8-hgdlt\" (UID: \"7f5851a1-d10c-445d-bffc-12a6acc01ead\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-hgdlt" Jan 29 16:43:52 crc kubenswrapper[4886]: I0129 16:43:52.413193 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn9zg\" (UniqueName: \"kubernetes.io/projected/7f5851a1-d10c-445d-bffc-12a6acc01ead-kube-api-access-hn9zg\") pod \"cluster-logging-operator-79cf69ddc8-hgdlt\" (UID: \"7f5851a1-d10c-445d-bffc-12a6acc01ead\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-hgdlt" Jan 29 16:43:52 crc kubenswrapper[4886]: I0129 16:43:52.440399 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn9zg\" (UniqueName: \"kubernetes.io/projected/7f5851a1-d10c-445d-bffc-12a6acc01ead-kube-api-access-hn9zg\") pod \"cluster-logging-operator-79cf69ddc8-hgdlt\" (UID: \"7f5851a1-d10c-445d-bffc-12a6acc01ead\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-hgdlt" Jan 29 16:43:52 crc kubenswrapper[4886]: I0129 16:43:52.523690 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-hgdlt" Jan 29 16:43:54 crc kubenswrapper[4886]: I0129 16:43:54.522249 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-hgdlt"] Jan 29 16:43:55 crc kubenswrapper[4886]: I0129 16:43:55.473475 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-hgdlt" event={"ID":"7f5851a1-d10c-445d-bffc-12a6acc01ead","Type":"ContainerStarted","Data":"1c863aaa14c7b6806e471be9f33b5f8232c61b26550f24b07513ac5c9bbb6931"} Jan 29 16:43:55 crc kubenswrapper[4886]: I0129 16:43:55.475438 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5b44bcdc44-bgqfw" event={"ID":"994fe9e1-7adf-4aab-bc9e-d51fd52286a9","Type":"ContainerStarted","Data":"68813b1abb27e77fc3f9ffa2e46de8cc5d9ca9355ad6ca0972ac29165f1bba50"} Jan 29 16:43:59 crc kubenswrapper[4886]: I0129 16:43:59.661221 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:43:59 crc kubenswrapper[4886]: I0129 16:43:59.661825 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:44:04 crc kubenswrapper[4886]: I0129 16:44:04.543204 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5b44bcdc44-bgqfw" event={"ID":"994fe9e1-7adf-4aab-bc9e-d51fd52286a9","Type":"ContainerStarted","Data":"6a40817d5e711fbe7de63ecd5931053ea427448bc64bccc055e04dc1036c0cc1"} Jan 29 16:44:04 crc kubenswrapper[4886]: I0129 16:44:04.544786 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-5b44bcdc44-bgqfw" Jan 29 16:44:04 crc kubenswrapper[4886]: I0129 16:44:04.546079 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-5b44bcdc44-bgqfw" Jan 29 16:44:04 crc kubenswrapper[4886]: I0129 16:44:04.546115 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-hgdlt" event={"ID":"7f5851a1-d10c-445d-bffc-12a6acc01ead","Type":"ContainerStarted","Data":"ba2cf50913b27ad205a4b605a3888e5d49d8cb1cbb8b48fb51fc3234dabf665e"} Jan 29 16:44:04 crc kubenswrapper[4886]: I0129 16:44:04.566374 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-5b44bcdc44-bgqfw" podStartSLOduration=1.154885225 podStartE2EDuration="16.566355739s" podCreationTimestamp="2026-01-29 16:43:48 +0000 UTC" firstStartedPulling="2026-01-29 16:43:48.867139496 +0000 UTC m=+1311.775858758" lastFinishedPulling="2026-01-29 16:44:04.27860999 +0000 UTC m=+1327.187329272" observedRunningTime="2026-01-29 16:44:04.563054077 +0000 UTC m=+1327.471773369" watchObservedRunningTime="2026-01-29 16:44:04.566355739 +0000 UTC m=+1327.475075021" Jan 29 16:44:04 crc kubenswrapper[4886]: I0129 
16:44:04.610837 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-hgdlt" podStartSLOduration=2.956335112 podStartE2EDuration="12.610815807s" podCreationTimestamp="2026-01-29 16:43:52 +0000 UTC" firstStartedPulling="2026-01-29 16:43:54.537953174 +0000 UTC m=+1317.446672446" lastFinishedPulling="2026-01-29 16:44:04.192433869 +0000 UTC m=+1327.101153141" observedRunningTime="2026-01-29 16:44:04.607603728 +0000 UTC m=+1327.516323020" watchObservedRunningTime="2026-01-29 16:44:04.610815807 +0000 UTC m=+1327.519535099" Jan 29 16:44:08 crc kubenswrapper[4886]: I0129 16:44:08.576278 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Jan 29 16:44:08 crc kubenswrapper[4886]: I0129 16:44:08.577409 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Jan 29 16:44:08 crc kubenswrapper[4886]: I0129 16:44:08.579544 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Jan 29 16:44:08 crc kubenswrapper[4886]: I0129 16:44:08.579762 4886 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-slt87" Jan 29 16:44:08 crc kubenswrapper[4886]: I0129 16:44:08.587841 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 29 16:44:08 crc kubenswrapper[4886]: I0129 16:44:08.595029 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Jan 29 16:44:08 crc kubenswrapper[4886]: I0129 16:44:08.609176 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnvp5\" (UniqueName: \"kubernetes.io/projected/ca730b03-66b8-4129-8cf2-2661a1baae99-kube-api-access-lnvp5\") pod \"minio\" (UID: \"ca730b03-66b8-4129-8cf2-2661a1baae99\") " pod="minio-dev/minio" Jan 29 16:44:08 crc kubenswrapper[4886]: I0129 16:44:08.609463 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3f92a964-ec52-44d0-bd50-9ea187253084\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f92a964-ec52-44d0-bd50-9ea187253084\") pod \"minio\" (UID: \"ca730b03-66b8-4129-8cf2-2661a1baae99\") " pod="minio-dev/minio" Jan 29 16:44:08 crc kubenswrapper[4886]: I0129 16:44:08.710832 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3f92a964-ec52-44d0-bd50-9ea187253084\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f92a964-ec52-44d0-bd50-9ea187253084\") pod \"minio\" (UID: \"ca730b03-66b8-4129-8cf2-2661a1baae99\") " pod="minio-dev/minio" Jan 29 16:44:08 crc kubenswrapper[4886]: I0129 16:44:08.711303 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnvp5\" (UniqueName: \"kubernetes.io/projected/ca730b03-66b8-4129-8cf2-2661a1baae99-kube-api-access-lnvp5\") pod \"minio\" (UID: \"ca730b03-66b8-4129-8cf2-2661a1baae99\") " pod="minio-dev/minio" Jan 29 16:44:08 crc kubenswrapper[4886]: I0129 16:44:08.714125 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 29 16:44:08 crc kubenswrapper[4886]: I0129 16:44:08.714177 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3f92a964-ec52-44d0-bd50-9ea187253084\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f92a964-ec52-44d0-bd50-9ea187253084\") pod \"minio\" (UID: \"ca730b03-66b8-4129-8cf2-2661a1baae99\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ac43d28ac76fe5a6ff50043777e896a20c8968497a85466ecb9b263eeca3a165/globalmount\"" pod="minio-dev/minio" Jan 29 16:44:08 crc kubenswrapper[4886]: I0129 16:44:08.742464 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3f92a964-ec52-44d0-bd50-9ea187253084\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f92a964-ec52-44d0-bd50-9ea187253084\") pod \"minio\" (UID: \"ca730b03-66b8-4129-8cf2-2661a1baae99\") " pod="minio-dev/minio" Jan 29 16:44:08 crc kubenswrapper[4886]: I0129 16:44:08.744403 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnvp5\" (UniqueName: \"kubernetes.io/projected/ca730b03-66b8-4129-8cf2-2661a1baae99-kube-api-access-lnvp5\") pod \"minio\" (UID: \"ca730b03-66b8-4129-8cf2-2661a1baae99\") " pod="minio-dev/minio" Jan 29 16:44:08 crc kubenswrapper[4886]: I0129 16:44:08.900871 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Jan 29 16:44:09 crc kubenswrapper[4886]: I0129 16:44:09.338444 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 29 16:44:09 crc kubenswrapper[4886]: I0129 16:44:09.577267 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"ca730b03-66b8-4129-8cf2-2661a1baae99","Type":"ContainerStarted","Data":"6ce2a9575576a20f51b733648f7267cb6d0a573c22532b8639b0ec3216ff2215"} Jan 29 16:44:14 crc kubenswrapper[4886]: I0129 16:44:14.608195 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"ca730b03-66b8-4129-8cf2-2661a1baae99","Type":"ContainerStarted","Data":"5d8c34c88ba4581d9cf41116cfb3af3c8eb7e4ce38737bd3eb408f90a4d7443e"} Jan 29 16:44:14 crc kubenswrapper[4886]: I0129 16:44:14.626578 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=3.881121467 podStartE2EDuration="8.62655469s" podCreationTimestamp="2026-01-29 16:44:06 +0000 UTC" firstStartedPulling="2026-01-29 16:44:09.356035145 +0000 UTC m=+1332.264754417" lastFinishedPulling="2026-01-29 16:44:14.101468358 +0000 UTC m=+1337.010187640" observedRunningTime="2026-01-29 16:44:14.623384531 +0000 UTC m=+1337.532103823" watchObservedRunningTime="2026-01-29 16:44:14.62655469 +0000 UTC m=+1337.535273962" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.178712 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-2jzzb"] Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.181070 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2jzzb" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.183830 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.184057 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.186591 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.186856 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.187169 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-jcxps" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.194713 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-2jzzb"] Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.289343 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-2jzzb\" (UID: \"befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2jzzb" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.289506 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1-config\") pod \"logging-loki-distributor-5f678c8dd6-2jzzb\" (UID: \"befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2jzzb" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.289634 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mcqr\" (UniqueName: \"kubernetes.io/projected/befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1-kube-api-access-7mcqr\") pod \"logging-loki-distributor-5f678c8dd6-2jzzb\" (UID: \"befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2jzzb" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.289700 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-2jzzb\" (UID: \"befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2jzzb" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.289835 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-2jzzb\" (UID: \"befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2jzzb" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.346300 4886 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-logging/logging-loki-querier-76788598db-85zgx"] Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.347264 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-85zgx" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.351202 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.351711 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.351811 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.361936 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-85zgx"] Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.390914 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-2jzzb\" (UID: \"befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2jzzb" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.391343 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-2jzzb\" (UID: \"befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2jzzb" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.391381 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1-config\") pod \"logging-loki-distributor-5f678c8dd6-2jzzb\" (UID: \"befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2jzzb" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.391416 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mcqr\" (UniqueName: \"kubernetes.io/projected/befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1-kube-api-access-7mcqr\") pod \"logging-loki-distributor-5f678c8dd6-2jzzb\" (UID: \"befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2jzzb" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.391445 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-2jzzb\" (UID: \"befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2jzzb" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.392654 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-2jzzb\" (UID: \"befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1\") " 
pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2jzzb" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.392916 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1-config\") pod \"logging-loki-distributor-5f678c8dd6-2jzzb\" (UID: \"befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2jzzb" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.396693 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-2jzzb\" (UID: \"befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2jzzb" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.399911 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-2jzzb\" (UID: \"befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2jzzb" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.421277 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mcqr\" (UniqueName: \"kubernetes.io/projected/befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1-kube-api-access-7mcqr\") pod \"logging-loki-distributor-5f678c8dd6-2jzzb\" (UID: \"befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2jzzb" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.439672 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-9q2lr"] Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.440779 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-9q2lr" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.448492 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.448693 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.455881 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-9q2lr"] Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.493178 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb80c257-3e6a-45c8-bb6f-6fb2676ef296-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-85zgx\" (UID: \"fb80c257-3e6a-45c8-bb6f-6fb2676ef296\") " pod="openshift-logging/logging-loki-querier-76788598db-85zgx" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.493237 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/fb80c257-3e6a-45c8-bb6f-6fb2676ef296-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-85zgx\" (UID: \"fb80c257-3e6a-45c8-bb6f-6fb2676ef296\") " pod="openshift-logging/logging-loki-querier-76788598db-85zgx" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.493307 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/fb80c257-3e6a-45c8-bb6f-6fb2676ef296-logging-loki-s3\") pod \"logging-loki-querier-76788598db-85zgx\" (UID: \"fb80c257-3e6a-45c8-bb6f-6fb2676ef296\") " pod="openshift-logging/logging-loki-querier-76788598db-85zgx" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.493347 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khtrb\" (UniqueName: \"kubernetes.io/projected/fb80c257-3e6a-45c8-bb6f-6fb2676ef296-kube-api-access-khtrb\") pod \"logging-loki-querier-76788598db-85zgx\" (UID: \"fb80c257-3e6a-45c8-bb6f-6fb2676ef296\") " pod="openshift-logging/logging-loki-querier-76788598db-85zgx" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.493390 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb80c257-3e6a-45c8-bb6f-6fb2676ef296-config\") pod \"logging-loki-querier-76788598db-85zgx\" (UID: \"fb80c257-3e6a-45c8-bb6f-6fb2676ef296\") " pod="openshift-logging/logging-loki-querier-76788598db-85zgx" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.493406 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/fb80c257-3e6a-45c8-bb6f-6fb2676ef296-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-85zgx\" (UID: \"fb80c257-3e6a-45c8-bb6f-6fb2676ef296\") " pod="openshift-logging/logging-loki-querier-76788598db-85zgx" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.510486 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2jzzb" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.558193 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-8587c9555d-m4k69"] Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.559286 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.564085 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-n4kj5" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.564356 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.564613 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.564830 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.565655 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.567929 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.576401 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-8587c9555d-cszl5"] Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.577887 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.585836 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-8587c9555d-m4k69"] Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.596697 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khtrb\" (UniqueName: \"kubernetes.io/projected/fb80c257-3e6a-45c8-bb6f-6fb2676ef296-kube-api-access-khtrb\") pod \"logging-loki-querier-76788598db-85zgx\" (UID: \"fb80c257-3e6a-45c8-bb6f-6fb2676ef296\") " pod="openshift-logging/logging-loki-querier-76788598db-85zgx" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.596755 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/fa3af54b-5759-4b53-a998-720bd2ff4608-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-9q2lr\" (UID: \"fa3af54b-5759-4b53-a998-720bd2ff4608\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-9q2lr" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.596807 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb80c257-3e6a-45c8-bb6f-6fb2676ef296-config\") pod \"logging-loki-querier-76788598db-85zgx\" (UID: \"fb80c257-3e6a-45c8-bb6f-6fb2676ef296\") " pod="openshift-logging/logging-loki-querier-76788598db-85zgx" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.596830 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/fb80c257-3e6a-45c8-bb6f-6fb2676ef296-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-85zgx\" (UID: \"fb80c257-3e6a-45c8-bb6f-6fb2676ef296\") " pod="openshift-logging/logging-loki-querier-76788598db-85zgx" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.596882 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/fa3af54b-5759-4b53-a998-720bd2ff4608-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-9q2lr\" (UID: \"fa3af54b-5759-4b53-a998-720bd2ff4608\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-9q2lr" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.596924 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa3af54b-5759-4b53-a998-720bd2ff4608-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-9q2lr\" (UID: \"fa3af54b-5759-4b53-a998-720bd2ff4608\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-9q2lr" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.596955 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb80c257-3e6a-45c8-bb6f-6fb2676ef296-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-85zgx\" (UID: \"fb80c257-3e6a-45c8-bb6f-6fb2676ef296\") " pod="openshift-logging/logging-loki-querier-76788598db-85zgx" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.596978 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-kz7sk\" (UniqueName: \"kubernetes.io/projected/fa3af54b-5759-4b53-a998-720bd2ff4608-kube-api-access-kz7sk\") pod \"logging-loki-query-frontend-69d9546745-9q2lr\" (UID: \"fa3af54b-5759-4b53-a998-720bd2ff4608\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-9q2lr" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.597006 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/fb80c257-3e6a-45c8-bb6f-6fb2676ef296-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-85zgx\" (UID: \"fb80c257-3e6a-45c8-bb6f-6fb2676ef296\") " pod="openshift-logging/logging-loki-querier-76788598db-85zgx" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.597037 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa3af54b-5759-4b53-a998-720bd2ff4608-config\") pod \"logging-loki-query-frontend-69d9546745-9q2lr\" (UID: \"fa3af54b-5759-4b53-a998-720bd2ff4608\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-9q2lr" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.597065 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/fb80c257-3e6a-45c8-bb6f-6fb2676ef296-logging-loki-s3\") pod \"logging-loki-querier-76788598db-85zgx\" (UID: \"fb80c257-3e6a-45c8-bb6f-6fb2676ef296\") " pod="openshift-logging/logging-loki-querier-76788598db-85zgx" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.600495 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-8587c9555d-cszl5"] Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.602390 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb80c257-3e6a-45c8-bb6f-6fb2676ef296-config\") pod \"logging-loki-querier-76788598db-85zgx\" (UID: \"fb80c257-3e6a-45c8-bb6f-6fb2676ef296\") " pod="openshift-logging/logging-loki-querier-76788598db-85zgx" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.602740 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb80c257-3e6a-45c8-bb6f-6fb2676ef296-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-85zgx\" (UID: \"fb80c257-3e6a-45c8-bb6f-6fb2676ef296\") " pod="openshift-logging/logging-loki-querier-76788598db-85zgx" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.609958 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/fb80c257-3e6a-45c8-bb6f-6fb2676ef296-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-85zgx\" (UID: \"fb80c257-3e6a-45c8-bb6f-6fb2676ef296\") " pod="openshift-logging/logging-loki-querier-76788598db-85zgx" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.611723 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/fb80c257-3e6a-45c8-bb6f-6fb2676ef296-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-85zgx\" (UID: \"fb80c257-3e6a-45c8-bb6f-6fb2676ef296\") " pod="openshift-logging/logging-loki-querier-76788598db-85zgx" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.621826 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/fb80c257-3e6a-45c8-bb6f-6fb2676ef296-logging-loki-s3\") pod \"logging-loki-querier-76788598db-85zgx\" (UID: \"fb80c257-3e6a-45c8-bb6f-6fb2676ef296\") " pod="openshift-logging/logging-loki-querier-76788598db-85zgx" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.652406 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khtrb\" (UniqueName: \"kubernetes.io/projected/fb80c257-3e6a-45c8-bb6f-6fb2676ef296-kube-api-access-khtrb\") pod \"logging-loki-querier-76788598db-85zgx\" (UID: \"fb80c257-3e6a-45c8-bb6f-6fb2676ef296\") " pod="openshift-logging/logging-loki-querier-76788598db-85zgx" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.669693 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-85zgx" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.707336 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgxgf\" (UniqueName: \"kubernetes.io/projected/046307bd-2e5e-4d92-b934-57ed8882d1bc-kube-api-access-wgxgf\") pod \"logging-loki-gateway-8587c9555d-m4k69\" (UID: \"046307bd-2e5e-4d92-b934-57ed8882d1bc\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.707396 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/fa3af54b-5759-4b53-a998-720bd2ff4608-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-9q2lr\" (UID: \"fa3af54b-5759-4b53-a998-720bd2ff4608\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-9q2lr" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.707422 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/046307bd-2e5e-4d92-b934-57ed8882d1bc-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-8587c9555d-m4k69\" (UID: \"046307bd-2e5e-4d92-b934-57ed8882d1bc\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.707445 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/046307bd-2e5e-4d92-b934-57ed8882d1bc-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-8587c9555d-m4k69\" (UID: \"046307bd-2e5e-4d92-b934-57ed8882d1bc\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.707858 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b-tenants\") pod \"logging-loki-gateway-8587c9555d-cszl5\" (UID: \"c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.707978 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/046307bd-2e5e-4d92-b934-57ed8882d1bc-tls-secret\") pod \"logging-loki-gateway-8587c9555d-m4k69\" (UID: \"046307bd-2e5e-4d92-b934-57ed8882d1bc\") " 
pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.708054 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tgmn\" (UniqueName: \"kubernetes.io/projected/c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b-kube-api-access-8tgmn\") pod \"logging-loki-gateway-8587c9555d-cszl5\" (UID: \"c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.708365 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/046307bd-2e5e-4d92-b934-57ed8882d1bc-tenants\") pod \"logging-loki-gateway-8587c9555d-m4k69\" (UID: \"046307bd-2e5e-4d92-b934-57ed8882d1bc\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.708419 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b-tls-secret\") pod \"logging-loki-gateway-8587c9555d-cszl5\" (UID: \"c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.708439 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b-rbac\") pod \"logging-loki-gateway-8587c9555d-cszl5\" (UID: \"c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.708493 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/046307bd-2e5e-4d92-b934-57ed8882d1bc-logging-loki-ca-bundle\") pod \"logging-loki-gateway-8587c9555d-m4k69\" (UID: \"046307bd-2e5e-4d92-b934-57ed8882d1bc\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.708535 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/fa3af54b-5759-4b53-a998-720bd2ff4608-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-9q2lr\" (UID: \"fa3af54b-5759-4b53-a998-720bd2ff4608\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-9q2lr" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.708554 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/046307bd-2e5e-4d92-b934-57ed8882d1bc-lokistack-gateway\") pod \"logging-loki-gateway-8587c9555d-m4k69\" (UID: \"046307bd-2e5e-4d92-b934-57ed8882d1bc\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.708584 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b-logging-loki-ca-bundle\") pod \"logging-loki-gateway-8587c9555d-cszl5\" (UID: \"c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b\") " 
pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.708613 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-8587c9555d-cszl5\" (UID: \"c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.708673 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa3af54b-5759-4b53-a998-720bd2ff4608-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-9q2lr\" (UID: \"fa3af54b-5759-4b53-a998-720bd2ff4608\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-9q2lr" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.708732 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz7sk\" (UniqueName: \"kubernetes.io/projected/fa3af54b-5759-4b53-a998-720bd2ff4608-kube-api-access-kz7sk\") pod \"logging-loki-query-frontend-69d9546745-9q2lr\" (UID: \"fa3af54b-5759-4b53-a998-720bd2ff4608\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-9q2lr" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.708768 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b-lokistack-gateway\") pod \"logging-loki-gateway-8587c9555d-cszl5\" (UID: \"c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.708791 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-8587c9555d-cszl5\" (UID: \"c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.708811 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/046307bd-2e5e-4d92-b934-57ed8882d1bc-rbac\") pod \"logging-loki-gateway-8587c9555d-m4k69\" (UID: \"046307bd-2e5e-4d92-b934-57ed8882d1bc\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.708838 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa3af54b-5759-4b53-a998-720bd2ff4608-config\") pod \"logging-loki-query-frontend-69d9546745-9q2lr\" (UID: \"fa3af54b-5759-4b53-a998-720bd2ff4608\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-9q2lr" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.710439 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa3af54b-5759-4b53-a998-720bd2ff4608-config\") pod \"logging-loki-query-frontend-69d9546745-9q2lr\" (UID: \"fa3af54b-5759-4b53-a998-720bd2ff4608\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-9q2lr" 
Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.714065 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa3af54b-5759-4b53-a998-720bd2ff4608-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-9q2lr\" (UID: \"fa3af54b-5759-4b53-a998-720bd2ff4608\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-9q2lr" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.727464 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/fa3af54b-5759-4b53-a998-720bd2ff4608-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-9q2lr\" (UID: \"fa3af54b-5759-4b53-a998-720bd2ff4608\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-9q2lr" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.727776 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/fa3af54b-5759-4b53-a998-720bd2ff4608-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-9q2lr\" (UID: \"fa3af54b-5759-4b53-a998-720bd2ff4608\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-9q2lr" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.735304 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz7sk\" (UniqueName: \"kubernetes.io/projected/fa3af54b-5759-4b53-a998-720bd2ff4608-kube-api-access-kz7sk\") pod \"logging-loki-query-frontend-69d9546745-9q2lr\" (UID: \"fa3af54b-5759-4b53-a998-720bd2ff4608\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-9q2lr" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.782213 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-9q2lr" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.810006 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b-lokistack-gateway\") pod \"logging-loki-gateway-8587c9555d-cszl5\" (UID: \"c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.810046 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-8587c9555d-cszl5\" (UID: \"c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.810070 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/046307bd-2e5e-4d92-b934-57ed8882d1bc-rbac\") pod \"logging-loki-gateway-8587c9555d-m4k69\" (UID: \"046307bd-2e5e-4d92-b934-57ed8882d1bc\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.810096 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgxgf\" (UniqueName: \"kubernetes.io/projected/046307bd-2e5e-4d92-b934-57ed8882d1bc-kube-api-access-wgxgf\") pod \"logging-loki-gateway-8587c9555d-m4k69\" (UID: \"046307bd-2e5e-4d92-b934-57ed8882d1bc\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.810120 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/046307bd-2e5e-4d92-b934-57ed8882d1bc-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-8587c9555d-m4k69\" (UID: \"046307bd-2e5e-4d92-b934-57ed8882d1bc\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.810141 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/046307bd-2e5e-4d92-b934-57ed8882d1bc-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-8587c9555d-m4k69\" (UID: \"046307bd-2e5e-4d92-b934-57ed8882d1bc\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.810155 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b-tenants\") pod \"logging-loki-gateway-8587c9555d-cszl5\" (UID: \"c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.810172 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/046307bd-2e5e-4d92-b934-57ed8882d1bc-tls-secret\") pod \"logging-loki-gateway-8587c9555d-m4k69\" (UID: \"046307bd-2e5e-4d92-b934-57ed8882d1bc\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.810189 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tgmn\" (UniqueName: \"kubernetes.io/projected/c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b-kube-api-access-8tgmn\") pod \"logging-loki-gateway-8587c9555d-cszl5\" (UID: \"c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.810212 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/046307bd-2e5e-4d92-b934-57ed8882d1bc-tenants\") pod \"logging-loki-gateway-8587c9555d-m4k69\" (UID: \"046307bd-2e5e-4d92-b934-57ed8882d1bc\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.810234 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b-tls-secret\") pod \"logging-loki-gateway-8587c9555d-cszl5\" (UID: \"c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.810249 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b-rbac\") pod \"logging-loki-gateway-8587c9555d-cszl5\" (UID: \"c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.810272 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/046307bd-2e5e-4d92-b934-57ed8882d1bc-logging-loki-ca-bundle\") pod \"logging-loki-gateway-8587c9555d-m4k69\" (UID: \"046307bd-2e5e-4d92-b934-57ed8882d1bc\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.810291 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/046307bd-2e5e-4d92-b934-57ed8882d1bc-lokistack-gateway\") pod \"logging-loki-gateway-8587c9555d-m4k69\" (UID: \"046307bd-2e5e-4d92-b934-57ed8882d1bc\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.810310 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b-logging-loki-ca-bundle\") pod \"logging-loki-gateway-8587c9555d-cszl5\" (UID: \"c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.810343 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-8587c9555d-cszl5\" (UID: \"c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.811093 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-8587c9555d-cszl5\" (UID: \"c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.812874 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b-rbac\") pod \"logging-loki-gateway-8587c9555d-cszl5\" (UID: \"c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.813839 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/046307bd-2e5e-4d92-b934-57ed8882d1bc-rbac\") pod \"logging-loki-gateway-8587c9555d-m4k69\" (UID: \"046307bd-2e5e-4d92-b934-57ed8882d1bc\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.816492 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/046307bd-2e5e-4d92-b934-57ed8882d1bc-logging-loki-ca-bundle\") pod \"logging-loki-gateway-8587c9555d-m4k69\" (UID: \"046307bd-2e5e-4d92-b934-57ed8882d1bc\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.816555 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/046307bd-2e5e-4d92-b934-57ed8882d1bc-tenants\") pod \"logging-loki-gateway-8587c9555d-m4k69\" (UID: \"046307bd-2e5e-4d92-b934-57ed8882d1bc\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.816656 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/046307bd-2e5e-4d92-b934-57ed8882d1bc-lokistack-gateway\") pod \"logging-loki-gateway-8587c9555d-m4k69\" (UID: \"046307bd-2e5e-4d92-b934-57ed8882d1bc\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.816756 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b-logging-loki-ca-bundle\") pod \"logging-loki-gateway-8587c9555d-cszl5\" (UID: \"c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.817381 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/046307bd-2e5e-4d92-b934-57ed8882d1bc-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-8587c9555d-m4k69\" (UID: \"046307bd-2e5e-4d92-b934-57ed8882d1bc\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.821293 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b-lokistack-gateway\") pod \"logging-loki-gateway-8587c9555d-cszl5\" (UID: \"c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" Jan 29 16:44:20 crc 
kubenswrapper[4886]: I0129 16:44:20.825526 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b-tls-secret\") pod \"logging-loki-gateway-8587c9555d-cszl5\" (UID: \"c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.825698 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-8587c9555d-cszl5\" (UID: \"c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.825737 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b-tenants\") pod \"logging-loki-gateway-8587c9555d-cszl5\" (UID: \"c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.835658 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/046307bd-2e5e-4d92-b934-57ed8882d1bc-tls-secret\") pod \"logging-loki-gateway-8587c9555d-m4k69\" (UID: \"046307bd-2e5e-4d92-b934-57ed8882d1bc\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.835805 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/046307bd-2e5e-4d92-b934-57ed8882d1bc-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-8587c9555d-m4k69\" (UID: \"046307bd-2e5e-4d92-b934-57ed8882d1bc\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.841926 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgxgf\" (UniqueName: \"kubernetes.io/projected/046307bd-2e5e-4d92-b934-57ed8882d1bc-kube-api-access-wgxgf\") pod \"logging-loki-gateway-8587c9555d-m4k69\" (UID: \"046307bd-2e5e-4d92-b934-57ed8882d1bc\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.844042 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tgmn\" (UniqueName: \"kubernetes.io/projected/c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b-kube-api-access-8tgmn\") pod \"logging-loki-gateway-8587c9555d-cszl5\" (UID: \"c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b\") " pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.921838 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" Jan 29 16:44:20 crc kubenswrapper[4886]: I0129 16:44:20.970958 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:20.999440 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-2jzzb"] Jan 29 16:44:21 crc kubenswrapper[4886]: W0129 16:44:21.007079 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbefd63fe_2ae3_4bb3_86fd_ac5486d7fbd1.slice/crio-8af0157bba70a29b5bc7d3c507bd6e596e9c97f47fb5e2fb053c7978b2ddd013 WatchSource:0}: Error finding container 8af0157bba70a29b5bc7d3c507bd6e596e9c97f47fb5e2fb053c7978b2ddd013: Status 404 returned error can't find the container with id 8af0157bba70a29b5bc7d3c507bd6e596e9c97f47fb5e2fb053c7978b2ddd013 Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.193020 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-85zgx"] Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.324556 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-9q2lr"] Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.337191 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-8587c9555d-m4k69"] Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.341575 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.342561 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.345089 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.345292 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.348312 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.478026 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.479081 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.481446 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.482149 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.483257 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.505163 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.505972 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.511315 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.511985 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.523933 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.524548 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/0dd1a523-96c1-4311-9452-92e6da8a7e9b-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"0dd1a523-96c1-4311-9452-92e6da8a7e9b\") " pod="openshift-logging/logging-loki-ingester-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.524650 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0dd1a523-96c1-4311-9452-92e6da8a7e9b-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"0dd1a523-96c1-4311-9452-92e6da8a7e9b\") " pod="openshift-logging/logging-loki-ingester-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.524684 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22mlm\" (UniqueName: \"kubernetes.io/projected/0dd1a523-96c1-4311-9452-92e6da8a7e9b-kube-api-access-22mlm\") pod \"logging-loki-ingester-0\" (UID: \"0dd1a523-96c1-4311-9452-92e6da8a7e9b\") " pod="openshift-logging/logging-loki-ingester-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.524710 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/0dd1a523-96c1-4311-9452-92e6da8a7e9b-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"0dd1a523-96c1-4311-9452-92e6da8a7e9b\") " pod="openshift-logging/logging-loki-ingester-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.524733 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dd1a523-96c1-4311-9452-92e6da8a7e9b-config\") pod \"logging-loki-ingester-0\" (UID: \"0dd1a523-96c1-4311-9452-92e6da8a7e9b\") " pod="openshift-logging/logging-loki-ingester-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.524754 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4fdac16a-ee35-40f8-903b-eb0d0da233ab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fdac16a-ee35-40f8-903b-eb0d0da233ab\") pod \"logging-loki-ingester-0\" (UID: \"0dd1a523-96c1-4311-9452-92e6da8a7e9b\") " pod="openshift-logging/logging-loki-ingester-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.524784 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1af84dd6-0683-4c8c-b3e8-62d9ada051fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1af84dd6-0683-4c8c-b3e8-62d9ada051fe\") pod \"logging-loki-ingester-0\" (UID: \"0dd1a523-96c1-4311-9452-92e6da8a7e9b\") " 
pod="openshift-logging/logging-loki-ingester-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.524816 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/0dd1a523-96c1-4311-9452-92e6da8a7e9b-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"0dd1a523-96c1-4311-9452-92e6da8a7e9b\") " pod="openshift-logging/logging-loki-ingester-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.568139 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-8587c9555d-cszl5"] Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.626739 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37c313cd-31f0-4fb3-9241-a3a59b1f55a6-config\") pod \"logging-loki-compactor-0\" (UID: \"37c313cd-31f0-4fb3-9241-a3a59b1f55a6\") " pod="openshift-logging/logging-loki-compactor-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.626800 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/37c313cd-31f0-4fb3-9241-a3a59b1f55a6-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"37c313cd-31f0-4fb3-9241-a3a59b1f55a6\") " pod="openshift-logging/logging-loki-compactor-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.626824 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/37c313cd-31f0-4fb3-9241-a3a59b1f55a6-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"37c313cd-31f0-4fb3-9241-a3a59b1f55a6\") " pod="openshift-logging/logging-loki-compactor-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.626858 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/0dd1a523-96c1-4311-9452-92e6da8a7e9b-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"0dd1a523-96c1-4311-9452-92e6da8a7e9b\") " pod="openshift-logging/logging-loki-ingester-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.626882 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/6059a5a7-5b65-481d-9b0f-f40d863e8310-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"6059a5a7-5b65-481d-9b0f-f40d863e8310\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.627038 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0dd1a523-96c1-4311-9452-92e6da8a7e9b-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"0dd1a523-96c1-4311-9452-92e6da8a7e9b\") " pod="openshift-logging/logging-loki-ingester-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.627121 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/37c313cd-31f0-4fb3-9241-a3a59b1f55a6-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"37c313cd-31f0-4fb3-9241-a3a59b1f55a6\") " 
pod="openshift-logging/logging-loki-compactor-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.627177 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37c313cd-31f0-4fb3-9241-a3a59b1f55a6-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"37c313cd-31f0-4fb3-9241-a3a59b1f55a6\") " pod="openshift-logging/logging-loki-compactor-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.627212 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22mlm\" (UniqueName: \"kubernetes.io/projected/0dd1a523-96c1-4311-9452-92e6da8a7e9b-kube-api-access-22mlm\") pod \"logging-loki-ingester-0\" (UID: \"0dd1a523-96c1-4311-9452-92e6da8a7e9b\") " pod="openshift-logging/logging-loki-ingester-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.627288 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/0dd1a523-96c1-4311-9452-92e6da8a7e9b-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"0dd1a523-96c1-4311-9452-92e6da8a7e9b\") " pod="openshift-logging/logging-loki-ingester-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.627353 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4f5f1690-e741-43ac-b894-10ed3cbabe48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f5f1690-e741-43ac-b894-10ed3cbabe48\") pod \"logging-loki-compactor-0\" (UID: \"37c313cd-31f0-4fb3-9241-a3a59b1f55a6\") " pod="openshift-logging/logging-loki-compactor-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.627396 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4fdac16a-ee35-40f8-903b-eb0d0da233ab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fdac16a-ee35-40f8-903b-eb0d0da233ab\") pod \"logging-loki-ingester-0\" (UID: \"0dd1a523-96c1-4311-9452-92e6da8a7e9b\") " pod="openshift-logging/logging-loki-ingester-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.627452 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/6059a5a7-5b65-481d-9b0f-f40d863e8310-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"6059a5a7-5b65-481d-9b0f-f40d863e8310\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.627528 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6059a5a7-5b65-481d-9b0f-f40d863e8310-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"6059a5a7-5b65-481d-9b0f-f40d863e8310\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.627559 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/0dd1a523-96c1-4311-9452-92e6da8a7e9b-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"0dd1a523-96c1-4311-9452-92e6da8a7e9b\") " pod="openshift-logging/logging-loki-ingester-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.627647 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7qv5\" (UniqueName: \"kubernetes.io/projected/6059a5a7-5b65-481d-9b0f-f40d863e8310-kube-api-access-g7qv5\") pod \"logging-loki-index-gateway-0\" (UID: \"6059a5a7-5b65-481d-9b0f-f40d863e8310\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.627708 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4c66\" (UniqueName: \"kubernetes.io/projected/37c313cd-31f0-4fb3-9241-a3a59b1f55a6-kube-api-access-j4c66\") pod \"logging-loki-compactor-0\" (UID: \"37c313cd-31f0-4fb3-9241-a3a59b1f55a6\") " pod="openshift-logging/logging-loki-compactor-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.627735 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6059a5a7-5b65-481d-9b0f-f40d863e8310-config\") pod \"logging-loki-index-gateway-0\" (UID: \"6059a5a7-5b65-481d-9b0f-f40d863e8310\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.627791 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0c763bef-e323-4b7f-ab21-3f0f2ab7b02d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0c763bef-e323-4b7f-ab21-3f0f2ab7b02d\") pod \"logging-loki-index-gateway-0\" (UID: \"6059a5a7-5b65-481d-9b0f-f40d863e8310\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.627848 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dd1a523-96c1-4311-9452-92e6da8a7e9b-config\") pod \"logging-loki-ingester-0\" (UID: \"0dd1a523-96c1-4311-9452-92e6da8a7e9b\") " pod="openshift-logging/logging-loki-ingester-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.627888 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1af84dd6-0683-4c8c-b3e8-62d9ada051fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1af84dd6-0683-4c8c-b3e8-62d9ada051fe\") pod \"logging-loki-ingester-0\" (UID: \"0dd1a523-96c1-4311-9452-92e6da8a7e9b\") " pod="openshift-logging/logging-loki-ingester-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.627916 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/6059a5a7-5b65-481d-9b0f-f40d863e8310-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"6059a5a7-5b65-481d-9b0f-f40d863e8310\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.631036 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0dd1a523-96c1-4311-9452-92e6da8a7e9b-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"0dd1a523-96c1-4311-9452-92e6da8a7e9b\") " pod="openshift-logging/logging-loki-ingester-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.631654 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dd1a523-96c1-4311-9452-92e6da8a7e9b-config\") pod \"logging-loki-ingester-0\" (UID: \"0dd1a523-96c1-4311-9452-92e6da8a7e9b\") " 
pod="openshift-logging/logging-loki-ingester-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.638139 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/0dd1a523-96c1-4311-9452-92e6da8a7e9b-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"0dd1a523-96c1-4311-9452-92e6da8a7e9b\") " pod="openshift-logging/logging-loki-ingester-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.638185 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/0dd1a523-96c1-4311-9452-92e6da8a7e9b-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"0dd1a523-96c1-4311-9452-92e6da8a7e9b\") " pod="openshift-logging/logging-loki-ingester-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.638399 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/0dd1a523-96c1-4311-9452-92e6da8a7e9b-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"0dd1a523-96c1-4311-9452-92e6da8a7e9b\") " pod="openshift-logging/logging-loki-ingester-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.640242 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.640360 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4fdac16a-ee35-40f8-903b-eb0d0da233ab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fdac16a-ee35-40f8-903b-eb0d0da233ab\") pod \"logging-loki-ingester-0\" (UID: \"0dd1a523-96c1-4311-9452-92e6da8a7e9b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/dfefc5fccf628ea15fdfe7921099c01b4bb138ba6509b8a6c369076c47177cbf/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.644703 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.647709 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22mlm\" (UniqueName: \"kubernetes.io/projected/0dd1a523-96c1-4311-9452-92e6da8a7e9b-kube-api-access-22mlm\") pod \"logging-loki-ingester-0\" (UID: \"0dd1a523-96c1-4311-9452-92e6da8a7e9b\") " pod="openshift-logging/logging-loki-ingester-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.649464 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1af84dd6-0683-4c8c-b3e8-62d9ada051fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1af84dd6-0683-4c8c-b3e8-62d9ada051fe\") pod \"logging-loki-ingester-0\" (UID: \"0dd1a523-96c1-4311-9452-92e6da8a7e9b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5b3978f589a62496479536024629daab06f7d4d39c9730314cdaa09e60ca86e3/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.665729 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-9q2lr" event={"ID":"fa3af54b-5759-4b53-a998-720bd2ff4608","Type":"ContainerStarted","Data":"bb95c23a9849ae26eaf9b7e2223192a8b25a7a63dfa2b7aef6d6de9edbdb7474"} Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.666715 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" event={"ID":"046307bd-2e5e-4d92-b934-57ed8882d1bc","Type":"ContainerStarted","Data":"c0f31a01bd117232cb2946e2cf38a8076865b0b71d4b859457f95dc2897a3304"} Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.667837 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" event={"ID":"c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b","Type":"ContainerStarted","Data":"34f0563aa3055d8c201e2d035e1a56d218e891682f5f9c02b4dae8c4441563f6"} Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.670103 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2jzzb" event={"ID":"befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1","Type":"ContainerStarted","Data":"8af0157bba70a29b5bc7d3c507bd6e596e9c97f47fb5e2fb053c7978b2ddd013"} Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.671319 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-85zgx" event={"ID":"fb80c257-3e6a-45c8-bb6f-6fb2676ef296","Type":"ContainerStarted","Data":"77ca3f196adfbd0d8677f15cf0e5d57bd3c0a4db27bf1aa94440340f876353f4"} Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.672160 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4fdac16a-ee35-40f8-903b-eb0d0da233ab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fdac16a-ee35-40f8-903b-eb0d0da233ab\") pod \"logging-loki-ingester-0\" (UID: \"0dd1a523-96c1-4311-9452-92e6da8a7e9b\") " pod="openshift-logging/logging-loki-ingester-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.679431 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1af84dd6-0683-4c8c-b3e8-62d9ada051fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1af84dd6-0683-4c8c-b3e8-62d9ada051fe\") pod \"logging-loki-ingester-0\" (UID: \"0dd1a523-96c1-4311-9452-92e6da8a7e9b\") " pod="openshift-logging/logging-loki-ingester-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 
16:44:21.729777 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6059a5a7-5b65-481d-9b0f-f40d863e8310-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"6059a5a7-5b65-481d-9b0f-f40d863e8310\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.729844 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7qv5\" (UniqueName: \"kubernetes.io/projected/6059a5a7-5b65-481d-9b0f-f40d863e8310-kube-api-access-g7qv5\") pod \"logging-loki-index-gateway-0\" (UID: \"6059a5a7-5b65-481d-9b0f-f40d863e8310\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.729869 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4c66\" (UniqueName: \"kubernetes.io/projected/37c313cd-31f0-4fb3-9241-a3a59b1f55a6-kube-api-access-j4c66\") pod \"logging-loki-compactor-0\" (UID: \"37c313cd-31f0-4fb3-9241-a3a59b1f55a6\") " pod="openshift-logging/logging-loki-compactor-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.729884 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6059a5a7-5b65-481d-9b0f-f40d863e8310-config\") pod \"logging-loki-index-gateway-0\" (UID: \"6059a5a7-5b65-481d-9b0f-f40d863e8310\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.729917 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0c763bef-e323-4b7f-ab21-3f0f2ab7b02d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0c763bef-e323-4b7f-ab21-3f0f2ab7b02d\") pod \"logging-loki-index-gateway-0\" (UID: \"6059a5a7-5b65-481d-9b0f-f40d863e8310\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.729941 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/6059a5a7-5b65-481d-9b0f-f40d863e8310-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"6059a5a7-5b65-481d-9b0f-f40d863e8310\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.729975 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37c313cd-31f0-4fb3-9241-a3a59b1f55a6-config\") pod \"logging-loki-compactor-0\" (UID: \"37c313cd-31f0-4fb3-9241-a3a59b1f55a6\") " pod="openshift-logging/logging-loki-compactor-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.730007 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/37c313cd-31f0-4fb3-9241-a3a59b1f55a6-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"37c313cd-31f0-4fb3-9241-a3a59b1f55a6\") " pod="openshift-logging/logging-loki-compactor-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.730026 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/37c313cd-31f0-4fb3-9241-a3a59b1f55a6-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"37c313cd-31f0-4fb3-9241-a3a59b1f55a6\") " 
pod="openshift-logging/logging-loki-compactor-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.730042 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/6059a5a7-5b65-481d-9b0f-f40d863e8310-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"6059a5a7-5b65-481d-9b0f-f40d863e8310\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.730066 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/37c313cd-31f0-4fb3-9241-a3a59b1f55a6-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"37c313cd-31f0-4fb3-9241-a3a59b1f55a6\") " pod="openshift-logging/logging-loki-compactor-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.730085 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37c313cd-31f0-4fb3-9241-a3a59b1f55a6-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"37c313cd-31f0-4fb3-9241-a3a59b1f55a6\") " pod="openshift-logging/logging-loki-compactor-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.730109 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4f5f1690-e741-43ac-b894-10ed3cbabe48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f5f1690-e741-43ac-b894-10ed3cbabe48\") pod \"logging-loki-compactor-0\" (UID: \"37c313cd-31f0-4fb3-9241-a3a59b1f55a6\") " pod="openshift-logging/logging-loki-compactor-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.730132 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/6059a5a7-5b65-481d-9b0f-f40d863e8310-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"6059a5a7-5b65-481d-9b0f-f40d863e8310\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.730953 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6059a5a7-5b65-481d-9b0f-f40d863e8310-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"6059a5a7-5b65-481d-9b0f-f40d863e8310\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.731978 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37c313cd-31f0-4fb3-9241-a3a59b1f55a6-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"37c313cd-31f0-4fb3-9241-a3a59b1f55a6\") " pod="openshift-logging/logging-loki-compactor-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.731996 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37c313cd-31f0-4fb3-9241-a3a59b1f55a6-config\") pod \"logging-loki-compactor-0\" (UID: \"37c313cd-31f0-4fb3-9241-a3a59b1f55a6\") " pod="openshift-logging/logging-loki-compactor-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.732363 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6059a5a7-5b65-481d-9b0f-f40d863e8310-config\") 
pod \"logging-loki-index-gateway-0\" (UID: \"6059a5a7-5b65-481d-9b0f-f40d863e8310\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.733429 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.733442 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.733471 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0c763bef-e323-4b7f-ab21-3f0f2ab7b02d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0c763bef-e323-4b7f-ab21-3f0f2ab7b02d\") pod \"logging-loki-index-gateway-0\" (UID: \"6059a5a7-5b65-481d-9b0f-f40d863e8310\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3dacbae4f7a33d2ee419c7c6a0927f4eea7710cf7144d70769ad46cc2dc1508a/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.733483 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4f5f1690-e741-43ac-b894-10ed3cbabe48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f5f1690-e741-43ac-b894-10ed3cbabe48\") pod \"logging-loki-compactor-0\" (UID: \"37c313cd-31f0-4fb3-9241-a3a59b1f55a6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3909a1c7239f5f2fe75c2ea0c916f7ceb2a5b008d6c55126ad2d53193ffa5c3c/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.734260 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/6059a5a7-5b65-481d-9b0f-f40d863e8310-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"6059a5a7-5b65-481d-9b0f-f40d863e8310\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.734360 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/6059a5a7-5b65-481d-9b0f-f40d863e8310-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"6059a5a7-5b65-481d-9b0f-f40d863e8310\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.734674 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/37c313cd-31f0-4fb3-9241-a3a59b1f55a6-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"37c313cd-31f0-4fb3-9241-a3a59b1f55a6\") " pod="openshift-logging/logging-loki-compactor-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.734895 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/6059a5a7-5b65-481d-9b0f-f40d863e8310-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"6059a5a7-5b65-481d-9b0f-f40d863e8310\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.735940 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: 
\"kubernetes.io/secret/37c313cd-31f0-4fb3-9241-a3a59b1f55a6-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"37c313cd-31f0-4fb3-9241-a3a59b1f55a6\") " pod="openshift-logging/logging-loki-compactor-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.744798 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/37c313cd-31f0-4fb3-9241-a3a59b1f55a6-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"37c313cd-31f0-4fb3-9241-a3a59b1f55a6\") " pod="openshift-logging/logging-loki-compactor-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.747895 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4c66\" (UniqueName: \"kubernetes.io/projected/37c313cd-31f0-4fb3-9241-a3a59b1f55a6-kube-api-access-j4c66\") pod \"logging-loki-compactor-0\" (UID: \"37c313cd-31f0-4fb3-9241-a3a59b1f55a6\") " pod="openshift-logging/logging-loki-compactor-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.755057 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7qv5\" (UniqueName: \"kubernetes.io/projected/6059a5a7-5b65-481d-9b0f-f40d863e8310-kube-api-access-g7qv5\") pod \"logging-loki-index-gateway-0\" (UID: \"6059a5a7-5b65-481d-9b0f-f40d863e8310\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.759697 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0c763bef-e323-4b7f-ab21-3f0f2ab7b02d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0c763bef-e323-4b7f-ab21-3f0f2ab7b02d\") pod \"logging-loki-index-gateway-0\" (UID: \"6059a5a7-5b65-481d-9b0f-f40d863e8310\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.775685 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4f5f1690-e741-43ac-b894-10ed3cbabe48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f5f1690-e741-43ac-b894-10ed3cbabe48\") pod \"logging-loki-compactor-0\" (UID: \"37c313cd-31f0-4fb3-9241-a3a59b1f55a6\") " pod="openshift-logging/logging-loki-compactor-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.806035 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.824581 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Jan 29 16:44:21 crc kubenswrapper[4886]: I0129 16:44:21.969971 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Jan 29 16:44:22 crc kubenswrapper[4886]: I0129 16:44:22.244880 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 29 16:44:22 crc kubenswrapper[4886]: W0129 16:44:22.245719 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37c313cd_31f0_4fb3_9241_a3a59b1f55a6.slice/crio-a551dc8be456869ea1e1222b18e854616a4c7e4dace41621eb275eb60b96cd55 WatchSource:0}: Error finding container a551dc8be456869ea1e1222b18e854616a4c7e4dace41621eb275eb60b96cd55: Status 404 returned error can't find the container with id a551dc8be456869ea1e1222b18e854616a4c7e4dace41621eb275eb60b96cd55 Jan 29 16:44:22 crc kubenswrapper[4886]: I0129 16:44:22.307602 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 29 16:44:22 crc kubenswrapper[4886]: W0129 16:44:22.315237 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6059a5a7_5b65_481d_9b0f_f40d863e8310.slice/crio-b45347c54ba64555976f02d8b2d5db19c6794894f5b3f77c3da6f026a87848ab WatchSource:0}: Error finding container b45347c54ba64555976f02d8b2d5db19c6794894f5b3f77c3da6f026a87848ab: Status 404 returned error can't find the container with id b45347c54ba64555976f02d8b2d5db19c6794894f5b3f77c3da6f026a87848ab Jan 29 16:44:22 crc kubenswrapper[4886]: I0129 16:44:22.416426 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 29 16:44:22 crc kubenswrapper[4886]: I0129 16:44:22.678747 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"37c313cd-31f0-4fb3-9241-a3a59b1f55a6","Type":"ContainerStarted","Data":"a551dc8be456869ea1e1222b18e854616a4c7e4dace41621eb275eb60b96cd55"} Jan 29 16:44:22 crc kubenswrapper[4886]: I0129 16:44:22.680798 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"0dd1a523-96c1-4311-9452-92e6da8a7e9b","Type":"ContainerStarted","Data":"8e80501c245fd584742ca0aeeba230a29fbeddca37fb0c1cb655df5f1e1f2e3d"} Jan 29 16:44:22 crc kubenswrapper[4886]: I0129 16:44:22.681859 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"6059a5a7-5b65-481d-9b0f-f40d863e8310","Type":"ContainerStarted","Data":"b45347c54ba64555976f02d8b2d5db19c6794894f5b3f77c3da6f026a87848ab"} Jan 29 16:44:25 crc kubenswrapper[4886]: I0129 16:44:25.713855 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" event={"ID":"c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b","Type":"ContainerStarted","Data":"11555333d970cf0b5c68a36387a912e24adea362f7935b44573ae3fd14f4ac21"} Jan 29 16:44:25 crc kubenswrapper[4886]: I0129 16:44:25.715764 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"0dd1a523-96c1-4311-9452-92e6da8a7e9b","Type":"ContainerStarted","Data":"7686228f02477b7ff31b7e28ac5f0c82132ef45f9b6f7fba4b4633855e191242"} Jan 29 16:44:25 crc kubenswrapper[4886]: I0129 16:44:25.715818 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Jan 29 16:44:25 crc kubenswrapper[4886]: I0129 16:44:25.717572 4886 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2jzzb" event={"ID":"befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1","Type":"ContainerStarted","Data":"804e2daa82e34c76d8b1f2bedac109c5769096ed12aa8dd35163911432df9432"} Jan 29 16:44:25 crc kubenswrapper[4886]: I0129 16:44:25.717709 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2jzzb" Jan 29 16:44:25 crc kubenswrapper[4886]: I0129 16:44:25.719132 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-85zgx" event={"ID":"fb80c257-3e6a-45c8-bb6f-6fb2676ef296","Type":"ContainerStarted","Data":"afeb486c3647cf154609c0757d87fc078c0f9cec0dafdc955d849b9054c655ef"} Jan 29 16:44:25 crc kubenswrapper[4886]: I0129 16:44:25.719248 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76788598db-85zgx" Jan 29 16:44:25 crc kubenswrapper[4886]: I0129 16:44:25.720238 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"6059a5a7-5b65-481d-9b0f-f40d863e8310","Type":"ContainerStarted","Data":"b046bed7cdcbf761144683f50ae015d81d5c196ab45a554254960d666e3ae48e"} Jan 29 16:44:25 crc kubenswrapper[4886]: I0129 16:44:25.720375 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Jan 29 16:44:25 crc kubenswrapper[4886]: I0129 16:44:25.721430 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" event={"ID":"046307bd-2e5e-4d92-b934-57ed8882d1bc","Type":"ContainerStarted","Data":"b58f31f74619068ed2a987c2c19a9c7c9d04c3ee32ad011a41acca1d9ae2c126"} Jan 29 16:44:25 crc kubenswrapper[4886]: I0129 16:44:25.722562 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-9q2lr" event={"ID":"fa3af54b-5759-4b53-a998-720bd2ff4608","Type":"ContainerStarted","Data":"7492769110a81fcaf0a6c529adb508a56b0abd143bd844812b0cb5eb702882ff"} Jan 29 16:44:25 crc kubenswrapper[4886]: I0129 16:44:25.723195 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-69d9546745-9q2lr" Jan 29 16:44:25 crc kubenswrapper[4886]: I0129 16:44:25.725448 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"37c313cd-31f0-4fb3-9241-a3a59b1f55a6","Type":"ContainerStarted","Data":"d5e326b3ffa182ca9ac8c50df1d959306c0b45b8ac2b6b70cdbc1d9e40f63b3d"} Jan 29 16:44:25 crc kubenswrapper[4886]: I0129 16:44:25.725590 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Jan 29 16:44:25 crc kubenswrapper[4886]: I0129 16:44:25.749692 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.033628762 podStartE2EDuration="5.749676536s" podCreationTimestamp="2026-01-29 16:44:20 +0000 UTC" firstStartedPulling="2026-01-29 16:44:22.440501216 +0000 UTC m=+1345.349220488" lastFinishedPulling="2026-01-29 16:44:25.15654899 +0000 UTC m=+1348.065268262" observedRunningTime="2026-01-29 16:44:25.739575455 +0000 UTC m=+1348.648294727" watchObservedRunningTime="2026-01-29 16:44:25.749676536 +0000 UTC m=+1348.658395808" Jan 29 16:44:25 crc kubenswrapper[4886]: I0129 16:44:25.765276 4886 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=2.92225166 podStartE2EDuration="5.765257931s" podCreationTimestamp="2026-01-29 16:44:20 +0000 UTC" firstStartedPulling="2026-01-29 16:44:22.317651043 +0000 UTC m=+1345.226370315" lastFinishedPulling="2026-01-29 16:44:25.160657314 +0000 UTC m=+1348.069376586" observedRunningTime="2026-01-29 16:44:25.757159365 +0000 UTC m=+1348.665878637" watchObservedRunningTime="2026-01-29 16:44:25.765257931 +0000 UTC m=+1348.673977203" Jan 29 16:44:25 crc kubenswrapper[4886]: I0129 16:44:25.781955 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-69d9546745-9q2lr" podStartSLOduration=1.9495604960000001 podStartE2EDuration="5.781931675s" podCreationTimestamp="2026-01-29 16:44:20 +0000 UTC" firstStartedPulling="2026-01-29 16:44:21.332857473 +0000 UTC m=+1344.241576745" lastFinishedPulling="2026-01-29 16:44:25.165228652 +0000 UTC m=+1348.073947924" observedRunningTime="2026-01-29 16:44:25.779480217 +0000 UTC m=+1348.688199509" watchObservedRunningTime="2026-01-29 16:44:25.781931675 +0000 UTC m=+1348.690650947" Jan 29 16:44:25 crc kubenswrapper[4886]: I0129 16:44:25.821809 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=2.9610464690000002 podStartE2EDuration="5.821781285s" podCreationTimestamp="2026-01-29 16:44:20 +0000 UTC" firstStartedPulling="2026-01-29 16:44:22.250123731 +0000 UTC m=+1345.158843013" lastFinishedPulling="2026-01-29 16:44:25.110858557 +0000 UTC m=+1348.019577829" observedRunningTime="2026-01-29 16:44:25.797826438 +0000 UTC m=+1348.706545710" watchObservedRunningTime="2026-01-29 16:44:25.821781285 +0000 UTC m=+1348.730500567" Jan 29 16:44:25 crc kubenswrapper[4886]: I0129 16:44:25.836001 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2jzzb" podStartSLOduration=1.6677709040000002 podStartE2EDuration="5.835980571s" podCreationTimestamp="2026-01-29 16:44:20 +0000 UTC" firstStartedPulling="2026-01-29 16:44:21.016021154 +0000 UTC m=+1343.924740426" lastFinishedPulling="2026-01-29 16:44:25.184230821 +0000 UTC m=+1348.092950093" observedRunningTime="2026-01-29 16:44:25.825918531 +0000 UTC m=+1348.734637803" watchObservedRunningTime="2026-01-29 16:44:25.835980571 +0000 UTC m=+1348.744699853" Jan 29 16:44:25 crc kubenswrapper[4886]: I0129 16:44:25.845345 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76788598db-85zgx" podStartSLOduration=1.8790429309999999 podStartE2EDuration="5.845307351s" podCreationTimestamp="2026-01-29 16:44:20 +0000 UTC" firstStartedPulling="2026-01-29 16:44:21.22294511 +0000 UTC m=+1344.131664382" lastFinishedPulling="2026-01-29 16:44:25.18920953 +0000 UTC m=+1348.097928802" observedRunningTime="2026-01-29 16:44:25.842442591 +0000 UTC m=+1348.751161883" watchObservedRunningTime="2026-01-29 16:44:25.845307351 +0000 UTC m=+1348.754026623" Jan 29 16:44:29 crc kubenswrapper[4886]: I0129 16:44:29.661383 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:44:29 crc 
kubenswrapper[4886]: I0129 16:44:29.661652 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:44:29 crc kubenswrapper[4886]: I0129 16:44:29.759249 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" event={"ID":"c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b","Type":"ContainerStarted","Data":"a59042ea205bed00605dd73fa40dc9f973e22640b33de559f4f2879ed5df1cda"} Jan 29 16:44:29 crc kubenswrapper[4886]: I0129 16:44:29.760166 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" Jan 29 16:44:29 crc kubenswrapper[4886]: I0129 16:44:29.760431 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" Jan 29 16:44:29 crc kubenswrapper[4886]: I0129 16:44:29.778835 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" Jan 29 16:44:29 crc kubenswrapper[4886]: I0129 16:44:29.780642 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" Jan 29 16:44:29 crc kubenswrapper[4886]: I0129 16:44:29.795592 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-8587c9555d-cszl5" podStartSLOduration=2.002796131 podStartE2EDuration="9.795572806s" podCreationTimestamp="2026-01-29 16:44:20 +0000 UTC" firstStartedPulling="2026-01-29 16:44:21.575021451 +0000 UTC m=+1344.483740723" lastFinishedPulling="2026-01-29 16:44:29.367798126 +0000 UTC m=+1352.276517398" observedRunningTime="2026-01-29 16:44:29.784278152 +0000 UTC m=+1352.692997424" watchObservedRunningTime="2026-01-29 16:44:29.795572806 +0000 UTC m=+1352.704292088" Jan 29 16:44:36 crc kubenswrapper[4886]: I0129 16:44:36.822947 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" event={"ID":"046307bd-2e5e-4d92-b934-57ed8882d1bc","Type":"ContainerStarted","Data":"0623f2702768ecd82ed023b2c9c84d2e0d51b9b0e6841d9171ff5498cf034bc7"} Jan 29 16:44:36 crc kubenswrapper[4886]: I0129 16:44:36.823395 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" Jan 29 16:44:36 crc kubenswrapper[4886]: I0129 16:44:36.837549 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" Jan 29 16:44:36 crc kubenswrapper[4886]: I0129 16:44:36.853779 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" podStartSLOduration=2.395511883 podStartE2EDuration="16.853751375s" podCreationTimestamp="2026-01-29 16:44:20 +0000 UTC" firstStartedPulling="2026-01-29 16:44:21.337396419 +0000 UTC m=+1344.246115691" lastFinishedPulling="2026-01-29 16:44:35.795635881 +0000 UTC m=+1358.704355183" observedRunningTime="2026-01-29 16:44:36.847170642 +0000 UTC m=+1359.755889934" watchObservedRunningTime="2026-01-29 16:44:36.853751375 +0000 UTC m=+1359.762470667" Jan 29 16:44:37 crc kubenswrapper[4886]: I0129 
16:44:37.831372 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" Jan 29 16:44:37 crc kubenswrapper[4886]: I0129 16:44:37.847675 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-8587c9555d-m4k69" Jan 29 16:44:40 crc kubenswrapper[4886]: I0129 16:44:40.520844 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2jzzb" Jan 29 16:44:40 crc kubenswrapper[4886]: I0129 16:44:40.678693 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76788598db-85zgx" Jan 29 16:44:40 crc kubenswrapper[4886]: I0129 16:44:40.788291 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-69d9546745-9q2lr" Jan 29 16:44:41 crc kubenswrapper[4886]: I0129 16:44:41.815890 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Jan 29 16:44:41 crc kubenswrapper[4886]: I0129 16:44:41.831080 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Jan 29 16:44:41 crc kubenswrapper[4886]: I0129 16:44:41.977530 4886 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Jan 29 16:44:41 crc kubenswrapper[4886]: I0129 16:44:41.977592 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="0dd1a523-96c1-4311-9452-92e6da8a7e9b" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 29 16:44:51 crc kubenswrapper[4886]: I0129 16:44:51.978526 4886 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Jan 29 16:44:51 crc kubenswrapper[4886]: I0129 16:44:51.979092 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="0dd1a523-96c1-4311-9452-92e6da8a7e9b" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 29 16:44:59 crc kubenswrapper[4886]: I0129 16:44:59.660697 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:44:59 crc kubenswrapper[4886]: I0129 16:44:59.661067 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:44:59 crc kubenswrapper[4886]: I0129 16:44:59.661119 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" Jan 29 16:44:59 crc kubenswrapper[4886]: I0129 16:44:59.661852 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e07342110c4b02787cb4723c63fa377397be4b574d1be34193ab1f7b4cebac54"} pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:44:59 crc kubenswrapper[4886]: I0129 16:44:59.661918 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" containerID="cri-o://e07342110c4b02787cb4723c63fa377397be4b574d1be34193ab1f7b4cebac54" gracePeriod=600 Jan 29 16:45:00 crc kubenswrapper[4886]: I0129 16:45:00.024191 4886 generic.go:334] "Generic (PLEG): container finished" podID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerID="e07342110c4b02787cb4723c63fa377397be4b574d1be34193ab1f7b4cebac54" exitCode=0 Jan 29 16:45:00 crc kubenswrapper[4886]: I0129 16:45:00.024250 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerDied","Data":"e07342110c4b02787cb4723c63fa377397be4b574d1be34193ab1f7b4cebac54"} Jan 29 16:45:00 crc kubenswrapper[4886]: I0129 16:45:00.024296 4886 scope.go:117] "RemoveContainer" containerID="84a645b31233e6f6691e7af3a8d18c33f1db7629388f3007d7e51e43f9f65e97" Jan 29 16:45:00 crc kubenswrapper[4886]: I0129 16:45:00.157639 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495085-rzdqr"] Jan 29 16:45:00 crc kubenswrapper[4886]: I0129 16:45:00.160370 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-rzdqr" Jan 29 16:45:00 crc kubenswrapper[4886]: I0129 16:45:00.162904 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 16:45:00 crc kubenswrapper[4886]: I0129 16:45:00.163760 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 16:45:00 crc kubenswrapper[4886]: I0129 16:45:00.166849 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495085-rzdqr"] Jan 29 16:45:00 crc kubenswrapper[4886]: I0129 16:45:00.258213 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a04871a-41ba-40fc-bfb0-ca8f308e9b01-config-volume\") pod \"collect-profiles-29495085-rzdqr\" (UID: \"0a04871a-41ba-40fc-bfb0-ca8f308e9b01\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-rzdqr" Jan 29 16:45:00 crc kubenswrapper[4886]: I0129 16:45:00.258268 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfdz7\" (UniqueName: \"kubernetes.io/projected/0a04871a-41ba-40fc-bfb0-ca8f308e9b01-kube-api-access-dfdz7\") pod \"collect-profiles-29495085-rzdqr\" (UID: \"0a04871a-41ba-40fc-bfb0-ca8f308e9b01\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-rzdqr" Jan 29 16:45:00 crc kubenswrapper[4886]: I0129 16:45:00.258468 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a04871a-41ba-40fc-bfb0-ca8f308e9b01-secret-volume\") pod \"collect-profiles-29495085-rzdqr\" (UID: \"0a04871a-41ba-40fc-bfb0-ca8f308e9b01\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-rzdqr" Jan 29 16:45:00 crc kubenswrapper[4886]: I0129 16:45:00.360105 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a04871a-41ba-40fc-bfb0-ca8f308e9b01-config-volume\") pod \"collect-profiles-29495085-rzdqr\" (UID: \"0a04871a-41ba-40fc-bfb0-ca8f308e9b01\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-rzdqr" Jan 29 16:45:00 crc kubenswrapper[4886]: I0129 16:45:00.360151 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfdz7\" (UniqueName: \"kubernetes.io/projected/0a04871a-41ba-40fc-bfb0-ca8f308e9b01-kube-api-access-dfdz7\") pod \"collect-profiles-29495085-rzdqr\" (UID: \"0a04871a-41ba-40fc-bfb0-ca8f308e9b01\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-rzdqr" Jan 29 16:45:00 crc kubenswrapper[4886]: I0129 16:45:00.360215 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a04871a-41ba-40fc-bfb0-ca8f308e9b01-secret-volume\") pod \"collect-profiles-29495085-rzdqr\" (UID: \"0a04871a-41ba-40fc-bfb0-ca8f308e9b01\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-rzdqr" Jan 29 16:45:00 crc kubenswrapper[4886]: I0129 16:45:00.362133 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a04871a-41ba-40fc-bfb0-ca8f308e9b01-config-volume\") pod 
\"collect-profiles-29495085-rzdqr\" (UID: \"0a04871a-41ba-40fc-bfb0-ca8f308e9b01\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-rzdqr" Jan 29 16:45:00 crc kubenswrapper[4886]: I0129 16:45:00.368097 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a04871a-41ba-40fc-bfb0-ca8f308e9b01-secret-volume\") pod \"collect-profiles-29495085-rzdqr\" (UID: \"0a04871a-41ba-40fc-bfb0-ca8f308e9b01\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-rzdqr" Jan 29 16:45:00 crc kubenswrapper[4886]: I0129 16:45:00.387664 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfdz7\" (UniqueName: \"kubernetes.io/projected/0a04871a-41ba-40fc-bfb0-ca8f308e9b01-kube-api-access-dfdz7\") pod \"collect-profiles-29495085-rzdqr\" (UID: \"0a04871a-41ba-40fc-bfb0-ca8f308e9b01\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-rzdqr" Jan 29 16:45:00 crc kubenswrapper[4886]: I0129 16:45:00.493907 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-rzdqr" Jan 29 16:45:00 crc kubenswrapper[4886]: I0129 16:45:00.955453 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495085-rzdqr"] Jan 29 16:45:00 crc kubenswrapper[4886]: W0129 16:45:00.968795 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a04871a_41ba_40fc_bfb0_ca8f308e9b01.slice/crio-07fb4c9195f3111e975be0d4d67ac8f418ab546897a410bb0eb6ff30585cce6b WatchSource:0}: Error finding container 07fb4c9195f3111e975be0d4d67ac8f418ab546897a410bb0eb6ff30585cce6b: Status 404 returned error can't find the container with id 07fb4c9195f3111e975be0d4d67ac8f418ab546897a410bb0eb6ff30585cce6b Jan 29 16:45:01 crc kubenswrapper[4886]: I0129 16:45:01.037146 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerStarted","Data":"705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463"} Jan 29 16:45:01 crc kubenswrapper[4886]: I0129 16:45:01.039123 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-rzdqr" event={"ID":"0a04871a-41ba-40fc-bfb0-ca8f308e9b01","Type":"ContainerStarted","Data":"07fb4c9195f3111e975be0d4d67ac8f418ab546897a410bb0eb6ff30585cce6b"} Jan 29 16:45:01 crc kubenswrapper[4886]: I0129 16:45:01.973257 4886 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Jan 29 16:45:01 crc kubenswrapper[4886]: I0129 16:45:01.973655 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="0dd1a523-96c1-4311-9452-92e6da8a7e9b" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 29 16:45:02 crc kubenswrapper[4886]: I0129 16:45:02.045755 4886 generic.go:334] "Generic (PLEG): container finished" podID="0a04871a-41ba-40fc-bfb0-ca8f308e9b01" containerID="11c1455f9476b08d8f802dd75f2ecc6d25f6377ab593571ce7bee30aa00fa339" exitCode=0 Jan 29 16:45:02 crc 
kubenswrapper[4886]: I0129 16:45:02.046281 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-rzdqr" event={"ID":"0a04871a-41ba-40fc-bfb0-ca8f308e9b01","Type":"ContainerDied","Data":"11c1455f9476b08d8f802dd75f2ecc6d25f6377ab593571ce7bee30aa00fa339"} Jan 29 16:45:03 crc kubenswrapper[4886]: I0129 16:45:03.334716 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-rzdqr" Jan 29 16:45:03 crc kubenswrapper[4886]: I0129 16:45:03.519559 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfdz7\" (UniqueName: \"kubernetes.io/projected/0a04871a-41ba-40fc-bfb0-ca8f308e9b01-kube-api-access-dfdz7\") pod \"0a04871a-41ba-40fc-bfb0-ca8f308e9b01\" (UID: \"0a04871a-41ba-40fc-bfb0-ca8f308e9b01\") " Jan 29 16:45:03 crc kubenswrapper[4886]: I0129 16:45:03.519672 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a04871a-41ba-40fc-bfb0-ca8f308e9b01-config-volume\") pod \"0a04871a-41ba-40fc-bfb0-ca8f308e9b01\" (UID: \"0a04871a-41ba-40fc-bfb0-ca8f308e9b01\") " Jan 29 16:45:03 crc kubenswrapper[4886]: I0129 16:45:03.519738 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a04871a-41ba-40fc-bfb0-ca8f308e9b01-secret-volume\") pod \"0a04871a-41ba-40fc-bfb0-ca8f308e9b01\" (UID: \"0a04871a-41ba-40fc-bfb0-ca8f308e9b01\") " Jan 29 16:45:03 crc kubenswrapper[4886]: I0129 16:45:03.521501 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a04871a-41ba-40fc-bfb0-ca8f308e9b01-config-volume" (OuterVolumeSpecName: "config-volume") pod "0a04871a-41ba-40fc-bfb0-ca8f308e9b01" (UID: "0a04871a-41ba-40fc-bfb0-ca8f308e9b01"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:45:03 crc kubenswrapper[4886]: I0129 16:45:03.524781 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a04871a-41ba-40fc-bfb0-ca8f308e9b01-kube-api-access-dfdz7" (OuterVolumeSpecName: "kube-api-access-dfdz7") pod "0a04871a-41ba-40fc-bfb0-ca8f308e9b01" (UID: "0a04871a-41ba-40fc-bfb0-ca8f308e9b01"). InnerVolumeSpecName "kube-api-access-dfdz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:45:03 crc kubenswrapper[4886]: I0129 16:45:03.524925 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a04871a-41ba-40fc-bfb0-ca8f308e9b01-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0a04871a-41ba-40fc-bfb0-ca8f308e9b01" (UID: "0a04871a-41ba-40fc-bfb0-ca8f308e9b01"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:45:03 crc kubenswrapper[4886]: I0129 16:45:03.621691 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfdz7\" (UniqueName: \"kubernetes.io/projected/0a04871a-41ba-40fc-bfb0-ca8f308e9b01-kube-api-access-dfdz7\") on node \"crc\" DevicePath \"\"" Jan 29 16:45:03 crc kubenswrapper[4886]: I0129 16:45:03.621743 4886 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a04871a-41ba-40fc-bfb0-ca8f308e9b01-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:45:03 crc kubenswrapper[4886]: I0129 16:45:03.621771 4886 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a04871a-41ba-40fc-bfb0-ca8f308e9b01-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:45:04 crc kubenswrapper[4886]: I0129 16:45:04.059221 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-rzdqr" event={"ID":"0a04871a-41ba-40fc-bfb0-ca8f308e9b01","Type":"ContainerDied","Data":"07fb4c9195f3111e975be0d4d67ac8f418ab546897a410bb0eb6ff30585cce6b"} Jan 29 16:45:04 crc kubenswrapper[4886]: I0129 16:45:04.059257 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07fb4c9195f3111e975be0d4d67ac8f418ab546897a410bb0eb6ff30585cce6b" Jan 29 16:45:04 crc kubenswrapper[4886]: I0129 16:45:04.059258 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-rzdqr" Jan 29 16:45:11 crc kubenswrapper[4886]: I0129 16:45:11.977927 4886 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Jan 29 16:45:11 crc kubenswrapper[4886]: I0129 16:45:11.978532 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="0dd1a523-96c1-4311-9452-92e6da8a7e9b" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 29 16:45:21 crc kubenswrapper[4886]: I0129 16:45:21.977096 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.224047 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-kp57g"] Jan 29 16:45:39 crc kubenswrapper[4886]: E0129 16:45:39.224867 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a04871a-41ba-40fc-bfb0-ca8f308e9b01" containerName="collect-profiles" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.224882 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a04871a-41ba-40fc-bfb0-ca8f308e9b01" containerName="collect-profiles" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.225032 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a04871a-41ba-40fc-bfb0-ca8f308e9b01" containerName="collect-profiles" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.225647 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.243583 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.244920 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.245233 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.246465 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.246685 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-vk7pr" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.260367 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.260409 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-kp57g"] Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.308179 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-kp57g"] Jan 29 16:45:39 crc kubenswrapper[4886]: E0129 16:45:39.309126 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-7ndgz metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-kp57g" podUID="0fdf3fef-2955-4239-bac3-5fa54858ca90" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.357564 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/0fdf3fef-2955-4239-bac3-5fa54858ca90-config-openshift-service-cacrt\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.357733 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/0fdf3fef-2955-4239-bac3-5fa54858ca90-sa-token\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.357826 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0fdf3fef-2955-4239-bac3-5fa54858ca90-tmp\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.357921 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/0fdf3fef-2955-4239-bac3-5fa54858ca90-metrics\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.357957 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/0fdf3fef-2955-4239-bac3-5fa54858ca90-datadir\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.358003 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fdf3fef-2955-4239-bac3-5fa54858ca90-config\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.358106 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ndgz\" (UniqueName: \"kubernetes.io/projected/0fdf3fef-2955-4239-bac3-5fa54858ca90-kube-api-access-7ndgz\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.358173 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/0fdf3fef-2955-4239-bac3-5fa54858ca90-collector-syslog-receiver\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.359425 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/0fdf3fef-2955-4239-bac3-5fa54858ca90-entrypoint\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.359476 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/0fdf3fef-2955-4239-bac3-5fa54858ca90-collector-token\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.360051 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fdf3fef-2955-4239-bac3-5fa54858ca90-trusted-ca\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.368053 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.379762 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.463802 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/0fdf3fef-2955-4239-bac3-5fa54858ca90-config-openshift-service-cacrt\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.463845 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/0fdf3fef-2955-4239-bac3-5fa54858ca90-sa-token\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.463884 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0fdf3fef-2955-4239-bac3-5fa54858ca90-tmp\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.463925 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/0fdf3fef-2955-4239-bac3-5fa54858ca90-metrics\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.463944 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/0fdf3fef-2955-4239-bac3-5fa54858ca90-datadir\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.463966 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fdf3fef-2955-4239-bac3-5fa54858ca90-config\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.463981 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ndgz\" (UniqueName: \"kubernetes.io/projected/0fdf3fef-2955-4239-bac3-5fa54858ca90-kube-api-access-7ndgz\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.463996 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/0fdf3fef-2955-4239-bac3-5fa54858ca90-collector-syslog-receiver\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.464009 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/0fdf3fef-2955-4239-bac3-5fa54858ca90-entrypoint\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.464025 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" 
(UniqueName: \"kubernetes.io/secret/0fdf3fef-2955-4239-bac3-5fa54858ca90-collector-token\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.464059 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fdf3fef-2955-4239-bac3-5fa54858ca90-trusted-ca\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.464504 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/0fdf3fef-2955-4239-bac3-5fa54858ca90-config-openshift-service-cacrt\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: E0129 16:45:39.464736 4886 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found Jan 29 16:45:39 crc kubenswrapper[4886]: E0129 16:45:39.464854 4886 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found Jan 29 16:45:39 crc kubenswrapper[4886]: E0129 16:45:39.464918 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0fdf3fef-2955-4239-bac3-5fa54858ca90-metrics podName:0fdf3fef-2955-4239-bac3-5fa54858ca90 nodeName:}" failed. No retries permitted until 2026-01-29 16:45:39.964892587 +0000 UTC m=+1422.873611859 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/0fdf3fef-2955-4239-bac3-5fa54858ca90-metrics") pod "collector-kp57g" (UID: "0fdf3fef-2955-4239-bac3-5fa54858ca90") : secret "collector-metrics" not found Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.464957 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/0fdf3fef-2955-4239-bac3-5fa54858ca90-datadir\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.465024 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fdf3fef-2955-4239-bac3-5fa54858ca90-config\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: E0129 16:45:39.465198 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0fdf3fef-2955-4239-bac3-5fa54858ca90-collector-syslog-receiver podName:0fdf3fef-2955-4239-bac3-5fa54858ca90 nodeName:}" failed. No retries permitted until 2026-01-29 16:45:39.965181225 +0000 UTC m=+1422.873900517 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "collector-syslog-receiver" (UniqueName: "kubernetes.io/secret/0fdf3fef-2955-4239-bac3-5fa54858ca90-collector-syslog-receiver") pod "collector-kp57g" (UID: "0fdf3fef-2955-4239-bac3-5fa54858ca90") : secret "collector-syslog-receiver" not found Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.465369 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/0fdf3fef-2955-4239-bac3-5fa54858ca90-entrypoint\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.465761 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fdf3fef-2955-4239-bac3-5fa54858ca90-trusted-ca\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.476873 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/0fdf3fef-2955-4239-bac3-5fa54858ca90-collector-token\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.477912 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0fdf3fef-2955-4239-bac3-5fa54858ca90-tmp\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.482909 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ndgz\" (UniqueName: \"kubernetes.io/projected/0fdf3fef-2955-4239-bac3-5fa54858ca90-kube-api-access-7ndgz\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.487535 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/0fdf3fef-2955-4239-bac3-5fa54858ca90-sa-token\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.564800 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/0fdf3fef-2955-4239-bac3-5fa54858ca90-config-openshift-service-cacrt\") pod \"0fdf3fef-2955-4239-bac3-5fa54858ca90\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.565104 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fdf3fef-2955-4239-bac3-5fa54858ca90-config\") pod \"0fdf3fef-2955-4239-bac3-5fa54858ca90\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.565229 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fdf3fef-2955-4239-bac3-5fa54858ca90-trusted-ca\") pod \"0fdf3fef-2955-4239-bac3-5fa54858ca90\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 
16:45:39.565351 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/0fdf3fef-2955-4239-bac3-5fa54858ca90-collector-token\") pod \"0fdf3fef-2955-4239-bac3-5fa54858ca90\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.565481 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/0fdf3fef-2955-4239-bac3-5fa54858ca90-entrypoint\") pod \"0fdf3fef-2955-4239-bac3-5fa54858ca90\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.565592 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/0fdf3fef-2955-4239-bac3-5fa54858ca90-datadir\") pod \"0fdf3fef-2955-4239-bac3-5fa54858ca90\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.565567 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fdf3fef-2955-4239-bac3-5fa54858ca90-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "0fdf3fef-2955-4239-bac3-5fa54858ca90" (UID: "0fdf3fef-2955-4239-bac3-5fa54858ca90"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.565649 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fdf3fef-2955-4239-bac3-5fa54858ca90-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "0fdf3fef-2955-4239-bac3-5fa54858ca90" (UID: "0fdf3fef-2955-4239-bac3-5fa54858ca90"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.565958 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fdf3fef-2955-4239-bac3-5fa54858ca90-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "0fdf3fef-2955-4239-bac3-5fa54858ca90" (UID: "0fdf3fef-2955-4239-bac3-5fa54858ca90"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.565985 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fdf3fef-2955-4239-bac3-5fa54858ca90-datadir" (OuterVolumeSpecName: "datadir") pod "0fdf3fef-2955-4239-bac3-5fa54858ca90" (UID: "0fdf3fef-2955-4239-bac3-5fa54858ca90"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.566533 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fdf3fef-2955-4239-bac3-5fa54858ca90-config" (OuterVolumeSpecName: "config") pod "0fdf3fef-2955-4239-bac3-5fa54858ca90" (UID: "0fdf3fef-2955-4239-bac3-5fa54858ca90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.568770 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdf3fef-2955-4239-bac3-5fa54858ca90-collector-token" (OuterVolumeSpecName: "collector-token") pod "0fdf3fef-2955-4239-bac3-5fa54858ca90" (UID: "0fdf3fef-2955-4239-bac3-5fa54858ca90"). 
InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.667834 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ndgz\" (UniqueName: \"kubernetes.io/projected/0fdf3fef-2955-4239-bac3-5fa54858ca90-kube-api-access-7ndgz\") pod \"0fdf3fef-2955-4239-bac3-5fa54858ca90\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.667886 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/0fdf3fef-2955-4239-bac3-5fa54858ca90-sa-token\") pod \"0fdf3fef-2955-4239-bac3-5fa54858ca90\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.668014 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0fdf3fef-2955-4239-bac3-5fa54858ca90-tmp\") pod \"0fdf3fef-2955-4239-bac3-5fa54858ca90\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.668429 4886 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/0fdf3fef-2955-4239-bac3-5fa54858ca90-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.668444 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fdf3fef-2955-4239-bac3-5fa54858ca90-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.668454 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fdf3fef-2955-4239-bac3-5fa54858ca90-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.668463 4886 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/0fdf3fef-2955-4239-bac3-5fa54858ca90-collector-token\") on node \"crc\" DevicePath \"\"" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.668473 4886 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/0fdf3fef-2955-4239-bac3-5fa54858ca90-entrypoint\") on node \"crc\" DevicePath \"\"" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.668480 4886 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/0fdf3fef-2955-4239-bac3-5fa54858ca90-datadir\") on node \"crc\" DevicePath \"\"" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.671190 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fdf3fef-2955-4239-bac3-5fa54858ca90-kube-api-access-7ndgz" (OuterVolumeSpecName: "kube-api-access-7ndgz") pod "0fdf3fef-2955-4239-bac3-5fa54858ca90" (UID: "0fdf3fef-2955-4239-bac3-5fa54858ca90"). InnerVolumeSpecName "kube-api-access-7ndgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.671246 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fdf3fef-2955-4239-bac3-5fa54858ca90-sa-token" (OuterVolumeSpecName: "sa-token") pod "0fdf3fef-2955-4239-bac3-5fa54858ca90" (UID: "0fdf3fef-2955-4239-bac3-5fa54858ca90"). 
InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.671355 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fdf3fef-2955-4239-bac3-5fa54858ca90-tmp" (OuterVolumeSpecName: "tmp") pod "0fdf3fef-2955-4239-bac3-5fa54858ca90" (UID: "0fdf3fef-2955-4239-bac3-5fa54858ca90"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.770062 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ndgz\" (UniqueName: \"kubernetes.io/projected/0fdf3fef-2955-4239-bac3-5fa54858ca90-kube-api-access-7ndgz\") on node \"crc\" DevicePath \"\"" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.770090 4886 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/0fdf3fef-2955-4239-bac3-5fa54858ca90-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.770099 4886 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0fdf3fef-2955-4239-bac3-5fa54858ca90-tmp\") on node \"crc\" DevicePath \"\"" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.972643 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/0fdf3fef-2955-4239-bac3-5fa54858ca90-collector-syslog-receiver\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.972774 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/0fdf3fef-2955-4239-bac3-5fa54858ca90-metrics\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.976020 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/0fdf3fef-2955-4239-bac3-5fa54858ca90-collector-syslog-receiver\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:39 crc kubenswrapper[4886]: I0129 16:45:39.977617 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/0fdf3fef-2955-4239-bac3-5fa54858ca90-metrics\") pod \"collector-kp57g\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " pod="openshift-logging/collector-kp57g" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.073771 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/0fdf3fef-2955-4239-bac3-5fa54858ca90-metrics\") pod \"0fdf3fef-2955-4239-bac3-5fa54858ca90\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.073992 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/0fdf3fef-2955-4239-bac3-5fa54858ca90-collector-syslog-receiver\") pod \"0fdf3fef-2955-4239-bac3-5fa54858ca90\" (UID: \"0fdf3fef-2955-4239-bac3-5fa54858ca90\") " Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.077611 4886 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdf3fef-2955-4239-bac3-5fa54858ca90-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "0fdf3fef-2955-4239-bac3-5fa54858ca90" (UID: "0fdf3fef-2955-4239-bac3-5fa54858ca90"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.078465 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdf3fef-2955-4239-bac3-5fa54858ca90-metrics" (OuterVolumeSpecName: "metrics") pod "0fdf3fef-2955-4239-bac3-5fa54858ca90" (UID: "0fdf3fef-2955-4239-bac3-5fa54858ca90"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.176568 4886 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/0fdf3fef-2955-4239-bac3-5fa54858ca90-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.176917 4886 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/0fdf3fef-2955-4239-bac3-5fa54858ca90-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.375771 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-kp57g" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.445952 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-kp57g"] Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.452936 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-kp57g"] Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.467056 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-qnmmn"] Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.469020 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.472092 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.472510 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.472859 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.473361 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.473760 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-vk7pr" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.478137 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-qnmmn"] Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.480416 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.585932 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqsbr\" (UniqueName: \"kubernetes.io/projected/bd8dc819-215b-44f5-b758-9bac32be60f5-kube-api-access-vqsbr\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.586035 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd8dc819-215b-44f5-b758-9bac32be60f5-trusted-ca\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.586119 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/bd8dc819-215b-44f5-b758-9bac32be60f5-datadir\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.586140 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/bd8dc819-215b-44f5-b758-9bac32be60f5-sa-token\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.586160 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/bd8dc819-215b-44f5-b758-9bac32be60f5-collector-syslog-receiver\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.586497 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/bd8dc819-215b-44f5-b758-9bac32be60f5-collector-token\") pod \"collector-qnmmn\" (UID: 
\"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.586828 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/bd8dc819-215b-44f5-b758-9bac32be60f5-metrics\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.586908 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd8dc819-215b-44f5-b758-9bac32be60f5-config\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.586982 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/bd8dc819-215b-44f5-b758-9bac32be60f5-entrypoint\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.587134 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/bd8dc819-215b-44f5-b758-9bac32be60f5-config-openshift-service-cacrt\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.587293 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bd8dc819-215b-44f5-b758-9bac32be60f5-tmp\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.624631 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fdf3fef-2955-4239-bac3-5fa54858ca90" path="/var/lib/kubelet/pods/0fdf3fef-2955-4239-bac3-5fa54858ca90/volumes" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.689779 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/bd8dc819-215b-44f5-b758-9bac32be60f5-datadir\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.689866 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/bd8dc819-215b-44f5-b758-9bac32be60f5-datadir\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.689868 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/bd8dc819-215b-44f5-b758-9bac32be60f5-sa-token\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.689980 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: 
\"kubernetes.io/secret/bd8dc819-215b-44f5-b758-9bac32be60f5-collector-syslog-receiver\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.690084 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/bd8dc819-215b-44f5-b758-9bac32be60f5-collector-token\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.690218 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/bd8dc819-215b-44f5-b758-9bac32be60f5-metrics\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.690280 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd8dc819-215b-44f5-b758-9bac32be60f5-config\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.690350 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/bd8dc819-215b-44f5-b758-9bac32be60f5-entrypoint\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.690416 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/bd8dc819-215b-44f5-b758-9bac32be60f5-config-openshift-service-cacrt\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.690483 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bd8dc819-215b-44f5-b758-9bac32be60f5-tmp\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.690718 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqsbr\" (UniqueName: \"kubernetes.io/projected/bd8dc819-215b-44f5-b758-9bac32be60f5-kube-api-access-vqsbr\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.690762 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd8dc819-215b-44f5-b758-9bac32be60f5-trusted-ca\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.691745 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd8dc819-215b-44f5-b758-9bac32be60f5-config\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.691755 
4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/bd8dc819-215b-44f5-b758-9bac32be60f5-config-openshift-service-cacrt\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.691975 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd8dc819-215b-44f5-b758-9bac32be60f5-trusted-ca\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.694544 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/bd8dc819-215b-44f5-b758-9bac32be60f5-collector-token\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.695861 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bd8dc819-215b-44f5-b758-9bac32be60f5-tmp\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.697006 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/bd8dc819-215b-44f5-b758-9bac32be60f5-collector-syslog-receiver\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.698288 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/bd8dc819-215b-44f5-b758-9bac32be60f5-metrics\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.714810 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/bd8dc819-215b-44f5-b758-9bac32be60f5-entrypoint\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.719099 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqsbr\" (UniqueName: \"kubernetes.io/projected/bd8dc819-215b-44f5-b758-9bac32be60f5-kube-api-access-vqsbr\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.723577 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/bd8dc819-215b-44f5-b758-9bac32be60f5-sa-token\") pod \"collector-qnmmn\" (UID: \"bd8dc819-215b-44f5-b758-9bac32be60f5\") " pod="openshift-logging/collector-qnmmn" Jan 29 16:45:40 crc kubenswrapper[4886]: I0129 16:45:40.826394 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-qnmmn" Jan 29 16:45:41 crc kubenswrapper[4886]: I0129 16:45:41.270773 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-qnmmn"] Jan 29 16:45:41 crc kubenswrapper[4886]: I0129 16:45:41.387822 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-qnmmn" event={"ID":"bd8dc819-215b-44f5-b758-9bac32be60f5","Type":"ContainerStarted","Data":"cb9480145b48c1c160d565f2702f69ad12d158e1ef85b91a82e365f071052f0f"} Jan 29 16:45:50 crc kubenswrapper[4886]: I0129 16:45:50.465176 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-qnmmn" event={"ID":"bd8dc819-215b-44f5-b758-9bac32be60f5","Type":"ContainerStarted","Data":"00f68f7f911c02ad1310aafa23adbce23e7c17489ab5225b4c7ab5fedca83995"} Jan 29 16:45:50 crc kubenswrapper[4886]: I0129 16:45:50.506890 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-qnmmn" podStartSLOduration=2.511114523 podStartE2EDuration="10.506859075s" podCreationTimestamp="2026-01-29 16:45:40 +0000 UTC" firstStartedPulling="2026-01-29 16:45:41.281141327 +0000 UTC m=+1424.189860609" lastFinishedPulling="2026-01-29 16:45:49.276885889 +0000 UTC m=+1432.185605161" observedRunningTime="2026-01-29 16:45:50.497789052 +0000 UTC m=+1433.406508374" watchObservedRunningTime="2026-01-29 16:45:50.506859075 +0000 UTC m=+1433.415578377" Jan 29 16:46:16 crc kubenswrapper[4886]: I0129 16:46:16.116308 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s4tkp"] Jan 29 16:46:16 crc kubenswrapper[4886]: I0129 16:46:16.120132 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s4tkp" Jan 29 16:46:16 crc kubenswrapper[4886]: I0129 16:46:16.129413 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s4tkp"] Jan 29 16:46:16 crc kubenswrapper[4886]: I0129 16:46:16.237501 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvhpt\" (UniqueName: \"kubernetes.io/projected/70fc38f3-74c0-462d-9ad2-60f109b2d365-kube-api-access-bvhpt\") pod \"redhat-marketplace-s4tkp\" (UID: \"70fc38f3-74c0-462d-9ad2-60f109b2d365\") " pod="openshift-marketplace/redhat-marketplace-s4tkp" Jan 29 16:46:16 crc kubenswrapper[4886]: I0129 16:46:16.237577 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70fc38f3-74c0-462d-9ad2-60f109b2d365-catalog-content\") pod \"redhat-marketplace-s4tkp\" (UID: \"70fc38f3-74c0-462d-9ad2-60f109b2d365\") " pod="openshift-marketplace/redhat-marketplace-s4tkp" Jan 29 16:46:16 crc kubenswrapper[4886]: I0129 16:46:16.237648 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70fc38f3-74c0-462d-9ad2-60f109b2d365-utilities\") pod \"redhat-marketplace-s4tkp\" (UID: \"70fc38f3-74c0-462d-9ad2-60f109b2d365\") " pod="openshift-marketplace/redhat-marketplace-s4tkp" Jan 29 16:46:16 crc kubenswrapper[4886]: I0129 16:46:16.339915 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70fc38f3-74c0-462d-9ad2-60f109b2d365-utilities\") pod \"redhat-marketplace-s4tkp\" (UID: 
\"70fc38f3-74c0-462d-9ad2-60f109b2d365\") " pod="openshift-marketplace/redhat-marketplace-s4tkp" Jan 29 16:46:16 crc kubenswrapper[4886]: I0129 16:46:16.340370 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvhpt\" (UniqueName: \"kubernetes.io/projected/70fc38f3-74c0-462d-9ad2-60f109b2d365-kube-api-access-bvhpt\") pod \"redhat-marketplace-s4tkp\" (UID: \"70fc38f3-74c0-462d-9ad2-60f109b2d365\") " pod="openshift-marketplace/redhat-marketplace-s4tkp" Jan 29 16:46:16 crc kubenswrapper[4886]: I0129 16:46:16.340521 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70fc38f3-74c0-462d-9ad2-60f109b2d365-utilities\") pod \"redhat-marketplace-s4tkp\" (UID: \"70fc38f3-74c0-462d-9ad2-60f109b2d365\") " pod="openshift-marketplace/redhat-marketplace-s4tkp" Jan 29 16:46:16 crc kubenswrapper[4886]: I0129 16:46:16.340677 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70fc38f3-74c0-462d-9ad2-60f109b2d365-catalog-content\") pod \"redhat-marketplace-s4tkp\" (UID: \"70fc38f3-74c0-462d-9ad2-60f109b2d365\") " pod="openshift-marketplace/redhat-marketplace-s4tkp" Jan 29 16:46:16 crc kubenswrapper[4886]: I0129 16:46:16.340909 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70fc38f3-74c0-462d-9ad2-60f109b2d365-catalog-content\") pod \"redhat-marketplace-s4tkp\" (UID: \"70fc38f3-74c0-462d-9ad2-60f109b2d365\") " pod="openshift-marketplace/redhat-marketplace-s4tkp" Jan 29 16:46:16 crc kubenswrapper[4886]: I0129 16:46:16.365635 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvhpt\" (UniqueName: \"kubernetes.io/projected/70fc38f3-74c0-462d-9ad2-60f109b2d365-kube-api-access-bvhpt\") pod \"redhat-marketplace-s4tkp\" (UID: \"70fc38f3-74c0-462d-9ad2-60f109b2d365\") " pod="openshift-marketplace/redhat-marketplace-s4tkp" Jan 29 16:46:16 crc kubenswrapper[4886]: I0129 16:46:16.462801 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s4tkp" Jan 29 16:46:16 crc kubenswrapper[4886]: I0129 16:46:16.915596 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s4tkp"] Jan 29 16:46:17 crc kubenswrapper[4886]: I0129 16:46:17.738820 4886 generic.go:334] "Generic (PLEG): container finished" podID="70fc38f3-74c0-462d-9ad2-60f109b2d365" containerID="a6ec04dedfc222e2930d911f7475d986731b7050751d92e32b232da84ad7a329" exitCode=0 Jan 29 16:46:17 crc kubenswrapper[4886]: I0129 16:46:17.738945 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s4tkp" event={"ID":"70fc38f3-74c0-462d-9ad2-60f109b2d365","Type":"ContainerDied","Data":"a6ec04dedfc222e2930d911f7475d986731b7050751d92e32b232da84ad7a329"} Jan 29 16:46:17 crc kubenswrapper[4886]: I0129 16:46:17.739050 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s4tkp" event={"ID":"70fc38f3-74c0-462d-9ad2-60f109b2d365","Type":"ContainerStarted","Data":"fc5358167411608003143a7e9911eec6e0a3a3cefade8c9902a65d696f96288f"} Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.570949 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n"] Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.580490 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2wln4n"] Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.589901 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t"] Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.598182 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bn8v4t"] Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.607145 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz"] Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.643206 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b00b2947-6947-4d0a-b2d9-42adefd8ebb3" path="/var/lib/kubelet/pods/b00b2947-6947-4d0a-b2d9-42adefd8ebb3/volumes" Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.644241 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6c5874b-97c3-4f3e-8e88-68c3653a6c4a" path="/var/lib/kubelet/pods/e6c5874b-97c3-4f3e-8e88-68c3653a6c4a/volumes" Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.644829 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08c2snz"] Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.644859 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jfv6k"] Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.644879 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q5hs7"] Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.644889 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qtk7r"] Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.645060 4886 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-qtk7r" podUID="42b8dc70-b29d-4995-9727-9b8e032bdad9" containerName="marketplace-operator" containerID="cri-o://f67a42038126009d6221ae06e997c4b3a4d04b56f64c29fbc910653a5611145e" gracePeriod=30 Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.645254 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jfv6k" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" containerName="registry-server" containerID="cri-o://735ad1f3c641d99dc2e721ad33c111100670ea307d45a8bb7eba837fe9c269ef" gracePeriod=30 Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.645717 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q5hs7" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091" containerName="registry-server" containerID="cri-o://efe76a3e970848dc3228f84915fb95af5f8ed14f0bcb5b641221638cab0f714e" gracePeriod=30 Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.655027 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qbl4"] Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.656455 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4qbl4" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" containerName="registry-server" containerID="cri-o://26900ab338bee6799e69566c733a5063575a2c6eeacf71f0f523248ae71b1b2d" gracePeriod=30 Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.664571 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s4tkp"] Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.673498 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m8snn"] Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.675118 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-m8snn" Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.680168 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zkk68"] Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.680477 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zkk68" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" containerName="registry-server" containerID="cri-o://29f7d7e31f9e12ad7f76231137a2e9a61ff5af739a92e0ab7f9fef0c87106990" gracePeriod=30 Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.686795 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m8snn"] Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.689006 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9cb13d4a-3940-45ef-9135-ff94c6a75b0c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-m8snn\" (UID: \"9cb13d4a-3940-45ef-9135-ff94c6a75b0c\") " pod="openshift-marketplace/marketplace-operator-79b997595-m8snn" Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.689103 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9cb13d4a-3940-45ef-9135-ff94c6a75b0c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-m8snn\" (UID: \"9cb13d4a-3940-45ef-9135-ff94c6a75b0c\") " pod="openshift-marketplace/marketplace-operator-79b997595-m8snn" Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.689128 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tz68\" (UniqueName: \"kubernetes.io/projected/9cb13d4a-3940-45ef-9135-ff94c6a75b0c-kube-api-access-6tz68\") pod \"marketplace-operator-79b997595-m8snn\" (UID: \"9cb13d4a-3940-45ef-9135-ff94c6a75b0c\") " pod="openshift-marketplace/marketplace-operator-79b997595-m8snn" Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.746450 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s4tkp" event={"ID":"70fc38f3-74c0-462d-9ad2-60f109b2d365","Type":"ContainerStarted","Data":"cd0174e3243b8d22b133a543427ce03858c997e6e589bac4aa5cc61f6f83f38c"} Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.790159 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9cb13d4a-3940-45ef-9135-ff94c6a75b0c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-m8snn\" (UID: \"9cb13d4a-3940-45ef-9135-ff94c6a75b0c\") " pod="openshift-marketplace/marketplace-operator-79b997595-m8snn" Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.790223 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tz68\" (UniqueName: \"kubernetes.io/projected/9cb13d4a-3940-45ef-9135-ff94c6a75b0c-kube-api-access-6tz68\") pod \"marketplace-operator-79b997595-m8snn\" (UID: \"9cb13d4a-3940-45ef-9135-ff94c6a75b0c\") " pod="openshift-marketplace/marketplace-operator-79b997595-m8snn" Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.790310 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/9cb13d4a-3940-45ef-9135-ff94c6a75b0c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-m8snn\" (UID: \"9cb13d4a-3940-45ef-9135-ff94c6a75b0c\") " pod="openshift-marketplace/marketplace-operator-79b997595-m8snn" Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.791981 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9cb13d4a-3940-45ef-9135-ff94c6a75b0c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-m8snn\" (UID: \"9cb13d4a-3940-45ef-9135-ff94c6a75b0c\") " pod="openshift-marketplace/marketplace-operator-79b997595-m8snn" Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.801749 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9cb13d4a-3940-45ef-9135-ff94c6a75b0c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-m8snn\" (UID: \"9cb13d4a-3940-45ef-9135-ff94c6a75b0c\") " pod="openshift-marketplace/marketplace-operator-79b997595-m8snn" Jan 29 16:46:18 crc kubenswrapper[4886]: I0129 16:46:18.811392 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tz68\" (UniqueName: \"kubernetes.io/projected/9cb13d4a-3940-45ef-9135-ff94c6a75b0c-kube-api-access-6tz68\") pod \"marketplace-operator-79b997595-m8snn\" (UID: \"9cb13d4a-3940-45ef-9135-ff94c6a75b0c\") " pod="openshift-marketplace/marketplace-operator-79b997595-m8snn" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.196307 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-m8snn" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.197602 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jfv6k" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.209152 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q5hs7" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.212221 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qbl4" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.234785 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qtk7r" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.248486 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zkk68" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.402122 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b8dc70-b29d-4995-9727-9b8e032bdad9-marketplace-trusted-ca\") pod \"42b8dc70-b29d-4995-9727-9b8e032bdad9\" (UID: \"42b8dc70-b29d-4995-9727-9b8e032bdad9\") " Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.402435 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/42b8dc70-b29d-4995-9727-9b8e032bdad9-marketplace-operator-metrics\") pod \"42b8dc70-b29d-4995-9727-9b8e032bdad9\" (UID: \"42b8dc70-b29d-4995-9727-9b8e032bdad9\") " Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.402463 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d84ce3e9-c41a-4a08-8d86-2a918d5e9450-catalog-content\") pod \"d84ce3e9-c41a-4a08-8d86-2a918d5e9450\" (UID: \"d84ce3e9-c41a-4a08-8d86-2a918d5e9450\") " Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.402502 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69003a39-1c09-4087-a494-ebfd69e973cf-catalog-content\") pod \"69003a39-1c09-4087-a494-ebfd69e973cf\" (UID: \"69003a39-1c09-4087-a494-ebfd69e973cf\") " Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.402521 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7325ad0-28bf-45e0-bbd5-160f441de091-utilities\") pod \"a7325ad0-28bf-45e0-bbd5-160f441de091\" (UID: \"a7325ad0-28bf-45e0-bbd5-160f441de091\") " Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.402564 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mlnk\" (UniqueName: \"kubernetes.io/projected/69003a39-1c09-4087-a494-ebfd69e973cf-kube-api-access-5mlnk\") pod \"69003a39-1c09-4087-a494-ebfd69e973cf\" (UID: \"69003a39-1c09-4087-a494-ebfd69e973cf\") " Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.402615 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7325ad0-28bf-45e0-bbd5-160f441de091-catalog-content\") pod \"a7325ad0-28bf-45e0-bbd5-160f441de091\" (UID: \"a7325ad0-28bf-45e0-bbd5-160f441de091\") " Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.402637 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69003a39-1c09-4087-a494-ebfd69e973cf-utilities\") pod \"69003a39-1c09-4087-a494-ebfd69e973cf\" (UID: \"69003a39-1c09-4087-a494-ebfd69e973cf\") " Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.402921 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn92n\" (UniqueName: \"kubernetes.io/projected/d84ce3e9-c41a-4a08-8d86-2a918d5e9450-kube-api-access-vn92n\") pod \"d84ce3e9-c41a-4a08-8d86-2a918d5e9450\" (UID: \"d84ce3e9-c41a-4a08-8d86-2a918d5e9450\") " Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.402943 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/57aa9115-b2d5-45aa-8ac3-e251c0907e45-utilities\") pod \"57aa9115-b2d5-45aa-8ac3-e251c0907e45\" (UID: \"57aa9115-b2d5-45aa-8ac3-e251c0907e45\") " Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.402971 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzm6k\" (UniqueName: \"kubernetes.io/projected/42b8dc70-b29d-4995-9727-9b8e032bdad9-kube-api-access-pzm6k\") pod \"42b8dc70-b29d-4995-9727-9b8e032bdad9\" (UID: \"42b8dc70-b29d-4995-9727-9b8e032bdad9\") " Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.402987 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57aa9115-b2d5-45aa-8ac3-e251c0907e45-catalog-content\") pod \"57aa9115-b2d5-45aa-8ac3-e251c0907e45\" (UID: \"57aa9115-b2d5-45aa-8ac3-e251c0907e45\") " Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.403120 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d84ce3e9-c41a-4a08-8d86-2a918d5e9450-utilities\") pod \"d84ce3e9-c41a-4a08-8d86-2a918d5e9450\" (UID: \"d84ce3e9-c41a-4a08-8d86-2a918d5e9450\") " Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.403147 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8jsj\" (UniqueName: \"kubernetes.io/projected/a7325ad0-28bf-45e0-bbd5-160f441de091-kube-api-access-c8jsj\") pod \"a7325ad0-28bf-45e0-bbd5-160f441de091\" (UID: \"a7325ad0-28bf-45e0-bbd5-160f441de091\") " Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.403170 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vf7sq\" (UniqueName: \"kubernetes.io/projected/57aa9115-b2d5-45aa-8ac3-e251c0907e45-kube-api-access-vf7sq\") pod \"57aa9115-b2d5-45aa-8ac3-e251c0907e45\" (UID: \"57aa9115-b2d5-45aa-8ac3-e251c0907e45\") " Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.402707 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42b8dc70-b29d-4995-9727-9b8e032bdad9-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "42b8dc70-b29d-4995-9727-9b8e032bdad9" (UID: "42b8dc70-b29d-4995-9727-9b8e032bdad9"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.403418 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7325ad0-28bf-45e0-bbd5-160f441de091-utilities" (OuterVolumeSpecName: "utilities") pod "a7325ad0-28bf-45e0-bbd5-160f441de091" (UID: "a7325ad0-28bf-45e0-bbd5-160f441de091"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.406209 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42b8dc70-b29d-4995-9727-9b8e032bdad9-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "42b8dc70-b29d-4995-9727-9b8e032bdad9" (UID: "42b8dc70-b29d-4995-9727-9b8e032bdad9"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.407130 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d84ce3e9-c41a-4a08-8d86-2a918d5e9450-utilities" (OuterVolumeSpecName: "utilities") pod "d84ce3e9-c41a-4a08-8d86-2a918d5e9450" (UID: "d84ce3e9-c41a-4a08-8d86-2a918d5e9450"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.410042 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7325ad0-28bf-45e0-bbd5-160f441de091-kube-api-access-c8jsj" (OuterVolumeSpecName: "kube-api-access-c8jsj") pod "a7325ad0-28bf-45e0-bbd5-160f441de091" (UID: "a7325ad0-28bf-45e0-bbd5-160f441de091"). InnerVolumeSpecName "kube-api-access-c8jsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.410110 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57aa9115-b2d5-45aa-8ac3-e251c0907e45-kube-api-access-vf7sq" (OuterVolumeSpecName: "kube-api-access-vf7sq") pod "57aa9115-b2d5-45aa-8ac3-e251c0907e45" (UID: "57aa9115-b2d5-45aa-8ac3-e251c0907e45"). InnerVolumeSpecName "kube-api-access-vf7sq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.410795 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69003a39-1c09-4087-a494-ebfd69e973cf-utilities" (OuterVolumeSpecName: "utilities") pod "69003a39-1c09-4087-a494-ebfd69e973cf" (UID: "69003a39-1c09-4087-a494-ebfd69e973cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.412082 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57aa9115-b2d5-45aa-8ac3-e251c0907e45-utilities" (OuterVolumeSpecName: "utilities") pod "57aa9115-b2d5-45aa-8ac3-e251c0907e45" (UID: "57aa9115-b2d5-45aa-8ac3-e251c0907e45"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.413945 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d84ce3e9-c41a-4a08-8d86-2a918d5e9450-kube-api-access-vn92n" (OuterVolumeSpecName: "kube-api-access-vn92n") pod "d84ce3e9-c41a-4a08-8d86-2a918d5e9450" (UID: "d84ce3e9-c41a-4a08-8d86-2a918d5e9450"). InnerVolumeSpecName "kube-api-access-vn92n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.414651 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42b8dc70-b29d-4995-9727-9b8e032bdad9-kube-api-access-pzm6k" (OuterVolumeSpecName: "kube-api-access-pzm6k") pod "42b8dc70-b29d-4995-9727-9b8e032bdad9" (UID: "42b8dc70-b29d-4995-9727-9b8e032bdad9"). InnerVolumeSpecName "kube-api-access-pzm6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.420814 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69003a39-1c09-4087-a494-ebfd69e973cf-kube-api-access-5mlnk" (OuterVolumeSpecName: "kube-api-access-5mlnk") pod "69003a39-1c09-4087-a494-ebfd69e973cf" (UID: "69003a39-1c09-4087-a494-ebfd69e973cf"). 
InnerVolumeSpecName "kube-api-access-5mlnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.439576 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57aa9115-b2d5-45aa-8ac3-e251c0907e45-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57aa9115-b2d5-45aa-8ac3-e251c0907e45" (UID: "57aa9115-b2d5-45aa-8ac3-e251c0907e45"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.459026 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69003a39-1c09-4087-a494-ebfd69e973cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69003a39-1c09-4087-a494-ebfd69e973cf" (UID: "69003a39-1c09-4087-a494-ebfd69e973cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.461427 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7325ad0-28bf-45e0-bbd5-160f441de091-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7325ad0-28bf-45e0-bbd5-160f441de091" (UID: "a7325ad0-28bf-45e0-bbd5-160f441de091"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.504634 4886 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b8dc70-b29d-4995-9727-9b8e032bdad9-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.504680 4886 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/42b8dc70-b29d-4995-9727-9b8e032bdad9-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.504698 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69003a39-1c09-4087-a494-ebfd69e973cf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.504711 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7325ad0-28bf-45e0-bbd5-160f441de091-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.504725 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mlnk\" (UniqueName: \"kubernetes.io/projected/69003a39-1c09-4087-a494-ebfd69e973cf-kube-api-access-5mlnk\") on node \"crc\" DevicePath \"\"" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.504737 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7325ad0-28bf-45e0-bbd5-160f441de091-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.504748 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69003a39-1c09-4087-a494-ebfd69e973cf-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.504760 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn92n\" (UniqueName: 
\"kubernetes.io/projected/d84ce3e9-c41a-4a08-8d86-2a918d5e9450-kube-api-access-vn92n\") on node \"crc\" DevicePath \"\"" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.504771 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57aa9115-b2d5-45aa-8ac3-e251c0907e45-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.504782 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzm6k\" (UniqueName: \"kubernetes.io/projected/42b8dc70-b29d-4995-9727-9b8e032bdad9-kube-api-access-pzm6k\") on node \"crc\" DevicePath \"\"" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.504794 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57aa9115-b2d5-45aa-8ac3-e251c0907e45-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.504806 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d84ce3e9-c41a-4a08-8d86-2a918d5e9450-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.504817 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8jsj\" (UniqueName: \"kubernetes.io/projected/a7325ad0-28bf-45e0-bbd5-160f441de091-kube-api-access-c8jsj\") on node \"crc\" DevicePath \"\"" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.504829 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vf7sq\" (UniqueName: \"kubernetes.io/projected/57aa9115-b2d5-45aa-8ac3-e251c0907e45-kube-api-access-vf7sq\") on node \"crc\" DevicePath \"\"" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.521773 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d84ce3e9-c41a-4a08-8d86-2a918d5e9450-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d84ce3e9-c41a-4a08-8d86-2a918d5e9450" (UID: "d84ce3e9-c41a-4a08-8d86-2a918d5e9450"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.605737 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d84ce3e9-c41a-4a08-8d86-2a918d5e9450-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.697647 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m8snn"] Jan 29 16:46:19 crc kubenswrapper[4886]: W0129 16:46:19.700737 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cb13d4a_3940_45ef_9135_ff94c6a75b0c.slice/crio-7413b62657ae27eb3cf801eb842106f18c56c183ec06f3f9275517ece6cc636b WatchSource:0}: Error finding container 7413b62657ae27eb3cf801eb842106f18c56c183ec06f3f9275517ece6cc636b: Status 404 returned error can't find the container with id 7413b62657ae27eb3cf801eb842106f18c56c183ec06f3f9275517ece6cc636b Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.755238 4886 generic.go:334] "Generic (PLEG): container finished" podID="70fc38f3-74c0-462d-9ad2-60f109b2d365" containerID="cd0174e3243b8d22b133a543427ce03858c997e6e589bac4aa5cc61f6f83f38c" exitCode=0 Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.755314 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s4tkp" event={"ID":"70fc38f3-74c0-462d-9ad2-60f109b2d365","Type":"ContainerDied","Data":"cd0174e3243b8d22b133a543427ce03858c997e6e589bac4aa5cc61f6f83f38c"} Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.758479 4886 generic.go:334] "Generic (PLEG): container finished" podID="42b8dc70-b29d-4995-9727-9b8e032bdad9" containerID="f67a42038126009d6221ae06e997c4b3a4d04b56f64c29fbc910653a5611145e" exitCode=0 Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.758595 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qtk7r" event={"ID":"42b8dc70-b29d-4995-9727-9b8e032bdad9","Type":"ContainerDied","Data":"f67a42038126009d6221ae06e997c4b3a4d04b56f64c29fbc910653a5611145e"} Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.758621 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qtk7r" event={"ID":"42b8dc70-b29d-4995-9727-9b8e032bdad9","Type":"ContainerDied","Data":"648bc592f49ae3cedaf90d37922cbc1e1495121ad8e957f81f4908846b5e05da"} Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.758623 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qtk7r" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.758648 4886 scope.go:117] "RemoveContainer" containerID="f67a42038126009d6221ae06e997c4b3a4d04b56f64c29fbc910653a5611145e" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.768933 4886 generic.go:334] "Generic (PLEG): container finished" podID="69003a39-1c09-4087-a494-ebfd69e973cf" containerID="735ad1f3c641d99dc2e721ad33c111100670ea307d45a8bb7eba837fe9c269ef" exitCode=0 Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.769022 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jfv6k" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.768983 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfv6k" event={"ID":"69003a39-1c09-4087-a494-ebfd69e973cf","Type":"ContainerDied","Data":"735ad1f3c641d99dc2e721ad33c111100670ea307d45a8bb7eba837fe9c269ef"} Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.769095 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfv6k" event={"ID":"69003a39-1c09-4087-a494-ebfd69e973cf","Type":"ContainerDied","Data":"e4d88167fe4815cd042b435714fee0326b8557c7e5fb2b46e9557a042ac995f8"} Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.784539 4886 generic.go:334] "Generic (PLEG): container finished" podID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" containerID="29f7d7e31f9e12ad7f76231137a2e9a61ff5af739a92e0ab7f9fef0c87106990" exitCode=0 Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.784664 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkk68" event={"ID":"d84ce3e9-c41a-4a08-8d86-2a918d5e9450","Type":"ContainerDied","Data":"29f7d7e31f9e12ad7f76231137a2e9a61ff5af739a92e0ab7f9fef0c87106990"} Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.784709 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkk68" event={"ID":"d84ce3e9-c41a-4a08-8d86-2a918d5e9450","Type":"ContainerDied","Data":"1de9e48715ad861e4d8bd78cecc12c2dcf52cdf92d4274338ddeebf931d7420d"} Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.784861 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zkk68" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.794009 4886 scope.go:117] "RemoveContainer" containerID="f67a42038126009d6221ae06e997c4b3a4d04b56f64c29fbc910653a5611145e" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.794444 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-m8snn" event={"ID":"9cb13d4a-3940-45ef-9135-ff94c6a75b0c","Type":"ContainerStarted","Data":"7413b62657ae27eb3cf801eb842106f18c56c183ec06f3f9275517ece6cc636b"} Jan 29 16:46:19 crc kubenswrapper[4886]: E0129 16:46:19.794787 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f67a42038126009d6221ae06e997c4b3a4d04b56f64c29fbc910653a5611145e\": container with ID starting with f67a42038126009d6221ae06e997c4b3a4d04b56f64c29fbc910653a5611145e not found: ID does not exist" containerID="f67a42038126009d6221ae06e997c4b3a4d04b56f64c29fbc910653a5611145e" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.794936 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f67a42038126009d6221ae06e997c4b3a4d04b56f64c29fbc910653a5611145e"} err="failed to get container status \"f67a42038126009d6221ae06e997c4b3a4d04b56f64c29fbc910653a5611145e\": rpc error: code = NotFound desc = could not find container \"f67a42038126009d6221ae06e997c4b3a4d04b56f64c29fbc910653a5611145e\": container with ID starting with f67a42038126009d6221ae06e997c4b3a4d04b56f64c29fbc910653a5611145e not found: ID does not exist" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.794970 4886 scope.go:117] "RemoveContainer" 
containerID="735ad1f3c641d99dc2e721ad33c111100670ea307d45a8bb7eba837fe9c269ef" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.800895 4886 generic.go:334] "Generic (PLEG): container finished" podID="a7325ad0-28bf-45e0-bbd5-160f441de091" containerID="efe76a3e970848dc3228f84915fb95af5f8ed14f0bcb5b641221638cab0f714e" exitCode=0 Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.800989 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5hs7" event={"ID":"a7325ad0-28bf-45e0-bbd5-160f441de091","Type":"ContainerDied","Data":"efe76a3e970848dc3228f84915fb95af5f8ed14f0bcb5b641221638cab0f714e"} Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.801016 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q5hs7" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.801049 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5hs7" event={"ID":"a7325ad0-28bf-45e0-bbd5-160f441de091","Type":"ContainerDied","Data":"58e358a0eb4540bb049b243d60b0ba858eec19efdffef34538e1bbcdff0edbc6"} Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.805565 4886 generic.go:334] "Generic (PLEG): container finished" podID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" containerID="26900ab338bee6799e69566c733a5063575a2c6eeacf71f0f523248ae71b1b2d" exitCode=0 Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.805659 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qbl4" event={"ID":"57aa9115-b2d5-45aa-8ac3-e251c0907e45","Type":"ContainerDied","Data":"26900ab338bee6799e69566c733a5063575a2c6eeacf71f0f523248ae71b1b2d"} Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.805716 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qbl4" event={"ID":"57aa9115-b2d5-45aa-8ac3-e251c0907e45","Type":"ContainerDied","Data":"68d81ee76eccd615ba9046c4c1e6648df9ef22ce6eee6d566d9309dd619e6010"} Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.805654 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qbl4" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.825276 4886 scope.go:117] "RemoveContainer" containerID="9bd48ab4996ca74fa989778e83dba86fbb2f2ad2104534befcf501673ddd232f" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.836438 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qtk7r"] Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.841293 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qtk7r"] Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.895265 4886 scope.go:117] "RemoveContainer" containerID="9dc94c69454cda473e048b5be83a123e92e3d4dcc0206e5c91ebde5e727d2647" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.911939 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q5hs7"] Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.921943 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q5hs7"] Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.933446 4886 scope.go:117] "RemoveContainer" containerID="735ad1f3c641d99dc2e721ad33c111100670ea307d45a8bb7eba837fe9c269ef" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.934292 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jfv6k"] Jan 29 16:46:19 crc kubenswrapper[4886]: E0129 16:46:19.938536 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"735ad1f3c641d99dc2e721ad33c111100670ea307d45a8bb7eba837fe9c269ef\": container with ID starting with 735ad1f3c641d99dc2e721ad33c111100670ea307d45a8bb7eba837fe9c269ef not found: ID does not exist" containerID="735ad1f3c641d99dc2e721ad33c111100670ea307d45a8bb7eba837fe9c269ef" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.938569 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"735ad1f3c641d99dc2e721ad33c111100670ea307d45a8bb7eba837fe9c269ef"} err="failed to get container status \"735ad1f3c641d99dc2e721ad33c111100670ea307d45a8bb7eba837fe9c269ef\": rpc error: code = NotFound desc = could not find container \"735ad1f3c641d99dc2e721ad33c111100670ea307d45a8bb7eba837fe9c269ef\": container with ID starting with 735ad1f3c641d99dc2e721ad33c111100670ea307d45a8bb7eba837fe9c269ef not found: ID does not exist" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.938592 4886 scope.go:117] "RemoveContainer" containerID="9bd48ab4996ca74fa989778e83dba86fbb2f2ad2104534befcf501673ddd232f" Jan 29 16:46:19 crc kubenswrapper[4886]: E0129 16:46:19.939856 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bd48ab4996ca74fa989778e83dba86fbb2f2ad2104534befcf501673ddd232f\": container with ID starting with 9bd48ab4996ca74fa989778e83dba86fbb2f2ad2104534befcf501673ddd232f not found: ID does not exist" containerID="9bd48ab4996ca74fa989778e83dba86fbb2f2ad2104534befcf501673ddd232f" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.945434 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jfv6k"] Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.948242 4886 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9bd48ab4996ca74fa989778e83dba86fbb2f2ad2104534befcf501673ddd232f"} err="failed to get container status \"9bd48ab4996ca74fa989778e83dba86fbb2f2ad2104534befcf501673ddd232f\": rpc error: code = NotFound desc = could not find container \"9bd48ab4996ca74fa989778e83dba86fbb2f2ad2104534befcf501673ddd232f\": container with ID starting with 9bd48ab4996ca74fa989778e83dba86fbb2f2ad2104534befcf501673ddd232f not found: ID does not exist" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.948321 4886 scope.go:117] "RemoveContainer" containerID="9dc94c69454cda473e048b5be83a123e92e3d4dcc0206e5c91ebde5e727d2647" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.948496 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zkk68"] Jan 29 16:46:19 crc kubenswrapper[4886]: E0129 16:46:19.949051 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dc94c69454cda473e048b5be83a123e92e3d4dcc0206e5c91ebde5e727d2647\": container with ID starting with 9dc94c69454cda473e048b5be83a123e92e3d4dcc0206e5c91ebde5e727d2647 not found: ID does not exist" containerID="9dc94c69454cda473e048b5be83a123e92e3d4dcc0206e5c91ebde5e727d2647" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.949082 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dc94c69454cda473e048b5be83a123e92e3d4dcc0206e5c91ebde5e727d2647"} err="failed to get container status \"9dc94c69454cda473e048b5be83a123e92e3d4dcc0206e5c91ebde5e727d2647\": rpc error: code = NotFound desc = could not find container \"9dc94c69454cda473e048b5be83a123e92e3d4dcc0206e5c91ebde5e727d2647\": container with ID starting with 9dc94c69454cda473e048b5be83a123e92e3d4dcc0206e5c91ebde5e727d2647 not found: ID does not exist" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.949101 4886 scope.go:117] "RemoveContainer" containerID="29f7d7e31f9e12ad7f76231137a2e9a61ff5af739a92e0ab7f9fef0c87106990" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.952947 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zkk68"] Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.958013 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qbl4"] Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.961573 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qbl4"] Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.964169 4886 scope.go:117] "RemoveContainer" containerID="0fa864e4732d0bb9a1a68d7843a62bc56027d9ccdfea2ad23148f5d87b7ecd0c" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.980341 4886 scope.go:117] "RemoveContainer" containerID="9771013e1661afa4b7f2a5038c24d8397533ccd7c529146bb8fb2adf4c78bad6" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.994424 4886 scope.go:117] "RemoveContainer" containerID="29f7d7e31f9e12ad7f76231137a2e9a61ff5af739a92e0ab7f9fef0c87106990" Jan 29 16:46:19 crc kubenswrapper[4886]: E0129 16:46:19.994996 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29f7d7e31f9e12ad7f76231137a2e9a61ff5af739a92e0ab7f9fef0c87106990\": container with ID starting with 29f7d7e31f9e12ad7f76231137a2e9a61ff5af739a92e0ab7f9fef0c87106990 not found: ID does not exist" 
containerID="29f7d7e31f9e12ad7f76231137a2e9a61ff5af739a92e0ab7f9fef0c87106990" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.995113 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29f7d7e31f9e12ad7f76231137a2e9a61ff5af739a92e0ab7f9fef0c87106990"} err="failed to get container status \"29f7d7e31f9e12ad7f76231137a2e9a61ff5af739a92e0ab7f9fef0c87106990\": rpc error: code = NotFound desc = could not find container \"29f7d7e31f9e12ad7f76231137a2e9a61ff5af739a92e0ab7f9fef0c87106990\": container with ID starting with 29f7d7e31f9e12ad7f76231137a2e9a61ff5af739a92e0ab7f9fef0c87106990 not found: ID does not exist" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.995155 4886 scope.go:117] "RemoveContainer" containerID="0fa864e4732d0bb9a1a68d7843a62bc56027d9ccdfea2ad23148f5d87b7ecd0c" Jan 29 16:46:19 crc kubenswrapper[4886]: E0129 16:46:19.995483 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fa864e4732d0bb9a1a68d7843a62bc56027d9ccdfea2ad23148f5d87b7ecd0c\": container with ID starting with 0fa864e4732d0bb9a1a68d7843a62bc56027d9ccdfea2ad23148f5d87b7ecd0c not found: ID does not exist" containerID="0fa864e4732d0bb9a1a68d7843a62bc56027d9ccdfea2ad23148f5d87b7ecd0c" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.995509 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fa864e4732d0bb9a1a68d7843a62bc56027d9ccdfea2ad23148f5d87b7ecd0c"} err="failed to get container status \"0fa864e4732d0bb9a1a68d7843a62bc56027d9ccdfea2ad23148f5d87b7ecd0c\": rpc error: code = NotFound desc = could not find container \"0fa864e4732d0bb9a1a68d7843a62bc56027d9ccdfea2ad23148f5d87b7ecd0c\": container with ID starting with 0fa864e4732d0bb9a1a68d7843a62bc56027d9ccdfea2ad23148f5d87b7ecd0c not found: ID does not exist" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.995529 4886 scope.go:117] "RemoveContainer" containerID="9771013e1661afa4b7f2a5038c24d8397533ccd7c529146bb8fb2adf4c78bad6" Jan 29 16:46:19 crc kubenswrapper[4886]: E0129 16:46:19.995732 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9771013e1661afa4b7f2a5038c24d8397533ccd7c529146bb8fb2adf4c78bad6\": container with ID starting with 9771013e1661afa4b7f2a5038c24d8397533ccd7c529146bb8fb2adf4c78bad6 not found: ID does not exist" containerID="9771013e1661afa4b7f2a5038c24d8397533ccd7c529146bb8fb2adf4c78bad6" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.995753 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9771013e1661afa4b7f2a5038c24d8397533ccd7c529146bb8fb2adf4c78bad6"} err="failed to get container status \"9771013e1661afa4b7f2a5038c24d8397533ccd7c529146bb8fb2adf4c78bad6\": rpc error: code = NotFound desc = could not find container \"9771013e1661afa4b7f2a5038c24d8397533ccd7c529146bb8fb2adf4c78bad6\": container with ID starting with 9771013e1661afa4b7f2a5038c24d8397533ccd7c529146bb8fb2adf4c78bad6 not found: ID does not exist" Jan 29 16:46:19 crc kubenswrapper[4886]: I0129 16:46:19.995766 4886 scope.go:117] "RemoveContainer" containerID="efe76a3e970848dc3228f84915fb95af5f8ed14f0bcb5b641221638cab0f714e" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.012628 4886 scope.go:117] "RemoveContainer" containerID="35212758091bf8c3d45fb0a080810d5fded73e71ef6c555edea92ef2d2dcec88" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 
16:46:20.014361 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s4tkp" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.035185 4886 scope.go:117] "RemoveContainer" containerID="bd8b45bdbc53c5a19f5d9b16c77f16088c5159f9cfac3b1dd35c0f4cdab8672d" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.056014 4886 scope.go:117] "RemoveContainer" containerID="efe76a3e970848dc3228f84915fb95af5f8ed14f0bcb5b641221638cab0f714e" Jan 29 16:46:20 crc kubenswrapper[4886]: E0129 16:46:20.056932 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efe76a3e970848dc3228f84915fb95af5f8ed14f0bcb5b641221638cab0f714e\": container with ID starting with efe76a3e970848dc3228f84915fb95af5f8ed14f0bcb5b641221638cab0f714e not found: ID does not exist" containerID="efe76a3e970848dc3228f84915fb95af5f8ed14f0bcb5b641221638cab0f714e" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.056989 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efe76a3e970848dc3228f84915fb95af5f8ed14f0bcb5b641221638cab0f714e"} err="failed to get container status \"efe76a3e970848dc3228f84915fb95af5f8ed14f0bcb5b641221638cab0f714e\": rpc error: code = NotFound desc = could not find container \"efe76a3e970848dc3228f84915fb95af5f8ed14f0bcb5b641221638cab0f714e\": container with ID starting with efe76a3e970848dc3228f84915fb95af5f8ed14f0bcb5b641221638cab0f714e not found: ID does not exist" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.057019 4886 scope.go:117] "RemoveContainer" containerID="35212758091bf8c3d45fb0a080810d5fded73e71ef6c555edea92ef2d2dcec88" Jan 29 16:46:20 crc kubenswrapper[4886]: E0129 16:46:20.057589 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35212758091bf8c3d45fb0a080810d5fded73e71ef6c555edea92ef2d2dcec88\": container with ID starting with 35212758091bf8c3d45fb0a080810d5fded73e71ef6c555edea92ef2d2dcec88 not found: ID does not exist" containerID="35212758091bf8c3d45fb0a080810d5fded73e71ef6c555edea92ef2d2dcec88" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.057654 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35212758091bf8c3d45fb0a080810d5fded73e71ef6c555edea92ef2d2dcec88"} err="failed to get container status \"35212758091bf8c3d45fb0a080810d5fded73e71ef6c555edea92ef2d2dcec88\": rpc error: code = NotFound desc = could not find container \"35212758091bf8c3d45fb0a080810d5fded73e71ef6c555edea92ef2d2dcec88\": container with ID starting with 35212758091bf8c3d45fb0a080810d5fded73e71ef6c555edea92ef2d2dcec88 not found: ID does not exist" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.057706 4886 scope.go:117] "RemoveContainer" containerID="bd8b45bdbc53c5a19f5d9b16c77f16088c5159f9cfac3b1dd35c0f4cdab8672d" Jan 29 16:46:20 crc kubenswrapper[4886]: E0129 16:46:20.058275 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd8b45bdbc53c5a19f5d9b16c77f16088c5159f9cfac3b1dd35c0f4cdab8672d\": container with ID starting with bd8b45bdbc53c5a19f5d9b16c77f16088c5159f9cfac3b1dd35c0f4cdab8672d not found: ID does not exist" containerID="bd8b45bdbc53c5a19f5d9b16c77f16088c5159f9cfac3b1dd35c0f4cdab8672d" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.058304 4886 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd8b45bdbc53c5a19f5d9b16c77f16088c5159f9cfac3b1dd35c0f4cdab8672d"} err="failed to get container status \"bd8b45bdbc53c5a19f5d9b16c77f16088c5159f9cfac3b1dd35c0f4cdab8672d\": rpc error: code = NotFound desc = could not find container \"bd8b45bdbc53c5a19f5d9b16c77f16088c5159f9cfac3b1dd35c0f4cdab8672d\": container with ID starting with bd8b45bdbc53c5a19f5d9b16c77f16088c5159f9cfac3b1dd35c0f4cdab8672d not found: ID does not exist" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.058394 4886 scope.go:117] "RemoveContainer" containerID="26900ab338bee6799e69566c733a5063575a2c6eeacf71f0f523248ae71b1b2d" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.077581 4886 scope.go:117] "RemoveContainer" containerID="d611665f3c9d008d6e151d05993039687945f7572ec764930a3d9ccea183c1b4" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.093561 4886 scope.go:117] "RemoveContainer" containerID="9483d17c90afb2d261251cb57ed87c956106b0b7bb964afcffdf0a2d1b5b13c1" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.111420 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70fc38f3-74c0-462d-9ad2-60f109b2d365-catalog-content\") pod \"70fc38f3-74c0-462d-9ad2-60f109b2d365\" (UID: \"70fc38f3-74c0-462d-9ad2-60f109b2d365\") " Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.111449 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70fc38f3-74c0-462d-9ad2-60f109b2d365-utilities\") pod \"70fc38f3-74c0-462d-9ad2-60f109b2d365\" (UID: \"70fc38f3-74c0-462d-9ad2-60f109b2d365\") " Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.111470 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvhpt\" (UniqueName: \"kubernetes.io/projected/70fc38f3-74c0-462d-9ad2-60f109b2d365-kube-api-access-bvhpt\") pod \"70fc38f3-74c0-462d-9ad2-60f109b2d365\" (UID: \"70fc38f3-74c0-462d-9ad2-60f109b2d365\") " Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.111472 4886 scope.go:117] "RemoveContainer" containerID="26900ab338bee6799e69566c733a5063575a2c6eeacf71f0f523248ae71b1b2d" Jan 29 16:46:20 crc kubenswrapper[4886]: E0129 16:46:20.111809 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26900ab338bee6799e69566c733a5063575a2c6eeacf71f0f523248ae71b1b2d\": container with ID starting with 26900ab338bee6799e69566c733a5063575a2c6eeacf71f0f523248ae71b1b2d not found: ID does not exist" containerID="26900ab338bee6799e69566c733a5063575a2c6eeacf71f0f523248ae71b1b2d" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.111861 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26900ab338bee6799e69566c733a5063575a2c6eeacf71f0f523248ae71b1b2d"} err="failed to get container status \"26900ab338bee6799e69566c733a5063575a2c6eeacf71f0f523248ae71b1b2d\": rpc error: code = NotFound desc = could not find container \"26900ab338bee6799e69566c733a5063575a2c6eeacf71f0f523248ae71b1b2d\": container with ID starting with 26900ab338bee6799e69566c733a5063575a2c6eeacf71f0f523248ae71b1b2d not found: ID does not exist" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.111878 4886 scope.go:117] "RemoveContainer" containerID="d611665f3c9d008d6e151d05993039687945f7572ec764930a3d9ccea183c1b4" Jan 29 16:46:20 crc 
kubenswrapper[4886]: E0129 16:46:20.112227 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d611665f3c9d008d6e151d05993039687945f7572ec764930a3d9ccea183c1b4\": container with ID starting with d611665f3c9d008d6e151d05993039687945f7572ec764930a3d9ccea183c1b4 not found: ID does not exist" containerID="d611665f3c9d008d6e151d05993039687945f7572ec764930a3d9ccea183c1b4"
Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.112241 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d611665f3c9d008d6e151d05993039687945f7572ec764930a3d9ccea183c1b4"} err="failed to get container status \"d611665f3c9d008d6e151d05993039687945f7572ec764930a3d9ccea183c1b4\": rpc error: code = NotFound desc = could not find container \"d611665f3c9d008d6e151d05993039687945f7572ec764930a3d9ccea183c1b4\": container with ID starting with d611665f3c9d008d6e151d05993039687945f7572ec764930a3d9ccea183c1b4 not found: ID does not exist"
Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.112254 4886 scope.go:117] "RemoveContainer" containerID="9483d17c90afb2d261251cb57ed87c956106b0b7bb964afcffdf0a2d1b5b13c1"
Jan 29 16:46:20 crc kubenswrapper[4886]: E0129 16:46:20.112493 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9483d17c90afb2d261251cb57ed87c956106b0b7bb964afcffdf0a2d1b5b13c1\": container with ID starting with 9483d17c90afb2d261251cb57ed87c956106b0b7bb964afcffdf0a2d1b5b13c1 not found: ID does not exist" containerID="9483d17c90afb2d261251cb57ed87c956106b0b7bb964afcffdf0a2d1b5b13c1"
Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.112510 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9483d17c90afb2d261251cb57ed87c956106b0b7bb964afcffdf0a2d1b5b13c1"} err="failed to get container status \"9483d17c90afb2d261251cb57ed87c956106b0b7bb964afcffdf0a2d1b5b13c1\": rpc error: code = NotFound desc = could not find container \"9483d17c90afb2d261251cb57ed87c956106b0b7bb964afcffdf0a2d1b5b13c1\": container with ID starting with 9483d17c90afb2d261251cb57ed87c956106b0b7bb964afcffdf0a2d1b5b13c1 not found: ID does not exist"
Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.114208 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70fc38f3-74c0-462d-9ad2-60f109b2d365-utilities" (OuterVolumeSpecName: "utilities") pod "70fc38f3-74c0-462d-9ad2-60f109b2d365" (UID: "70fc38f3-74c0-462d-9ad2-60f109b2d365"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.118086 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70fc38f3-74c0-462d-9ad2-60f109b2d365-kube-api-access-bvhpt" (OuterVolumeSpecName: "kube-api-access-bvhpt") pod "70fc38f3-74c0-462d-9ad2-60f109b2d365" (UID: "70fc38f3-74c0-462d-9ad2-60f109b2d365"). InnerVolumeSpecName "kube-api-access-bvhpt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.150408 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70fc38f3-74c0-462d-9ad2-60f109b2d365-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70fc38f3-74c0-462d-9ad2-60f109b2d365" (UID: "70fc38f3-74c0-462d-9ad2-60f109b2d365"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.212265 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70fc38f3-74c0-462d-9ad2-60f109b2d365-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.212299 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70fc38f3-74c0-462d-9ad2-60f109b2d365-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.212309 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvhpt\" (UniqueName: \"kubernetes.io/projected/70fc38f3-74c0-462d-9ad2-60f109b2d365-kube-api-access-bvhpt\") on node \"crc\" DevicePath \"\""
Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.633443 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20a67e3b-3393-4dea-81c8-42c2e22ad315" path="/var/lib/kubelet/pods/20a67e3b-3393-4dea-81c8-42c2e22ad315/volumes"
Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.634790 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42b8dc70-b29d-4995-9727-9b8e032bdad9" path="/var/lib/kubelet/pods/42b8dc70-b29d-4995-9727-9b8e032bdad9/volumes"
Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.635774 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" path="/var/lib/kubelet/pods/57aa9115-b2d5-45aa-8ac3-e251c0907e45/volumes"
Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.637756 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" path="/var/lib/kubelet/pods/69003a39-1c09-4087-a494-ebfd69e973cf/volumes"
Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.638981 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091" path="/var/lib/kubelet/pods/a7325ad0-28bf-45e0-bbd5-160f441de091/volumes"
Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.641002 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" path="/var/lib/kubelet/pods/d84ce3e9-c41a-4a08-8d86-2a918d5e9450/volumes"
Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.818646 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-m8snn" event={"ID":"9cb13d4a-3940-45ef-9135-ff94c6a75b0c","Type":"ContainerStarted","Data":"b1c2b8fd07bb7f6da16b71e8f971678bad1efd8c3f30512159a263059ee2d77a"}
Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.818890 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-m8snn"
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s4tkp" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.824863 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s4tkp" event={"ID":"70fc38f3-74c0-462d-9ad2-60f109b2d365","Type":"ContainerDied","Data":"fc5358167411608003143a7e9911eec6e0a3a3cefade8c9902a65d696f96288f"} Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.824940 4886 scope.go:117] "RemoveContainer" containerID="cd0174e3243b8d22b133a543427ce03858c997e6e589bac4aa5cc61f6f83f38c" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.828227 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-m8snn" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.865754 4886 scope.go:117] "RemoveContainer" containerID="a6ec04dedfc222e2930d911f7475d986731b7050751d92e32b232da84ad7a329" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.882843 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-m8snn" podStartSLOduration=2.882820359 podStartE2EDuration="2.882820359s" podCreationTimestamp="2026-01-29 16:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:46:20.857045806 +0000 UTC m=+1463.765765128" watchObservedRunningTime="2026-01-29 16:46:20.882820359 +0000 UTC m=+1463.791539641" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.883867 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ws2lm"] Jan 29 16:46:20 crc kubenswrapper[4886]: E0129 16:46:20.884171 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" containerName="extract-utilities" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.884188 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" containerName="extract-utilities" Jan 29 16:46:20 crc kubenswrapper[4886]: E0129 16:46:20.884197 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091" containerName="extract-content" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.884205 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091" containerName="extract-content" Jan 29 16:46:20 crc kubenswrapper[4886]: E0129 16:46:20.884221 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" containerName="extract-content" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.884228 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" containerName="extract-content" Jan 29 16:46:20 crc kubenswrapper[4886]: E0129 16:46:20.884243 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b8dc70-b29d-4995-9727-9b8e032bdad9" containerName="marketplace-operator" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.884252 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b8dc70-b29d-4995-9727-9b8e032bdad9" containerName="marketplace-operator" Jan 29 16:46:20 crc kubenswrapper[4886]: E0129 16:46:20.884261 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" containerName="registry-server" Jan 29 16:46:20 crc 
kubenswrapper[4886]: I0129 16:46:20.884270 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" containerName="registry-server" Jan 29 16:46:20 crc kubenswrapper[4886]: E0129 16:46:20.884281 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091" containerName="extract-utilities" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.884288 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091" containerName="extract-utilities" Jan 29 16:46:20 crc kubenswrapper[4886]: E0129 16:46:20.884301 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" containerName="extract-utilities" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.884307 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" containerName="extract-utilities" Jan 29 16:46:20 crc kubenswrapper[4886]: E0129 16:46:20.884316 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70fc38f3-74c0-462d-9ad2-60f109b2d365" containerName="extract-utilities" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.884340 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="70fc38f3-74c0-462d-9ad2-60f109b2d365" containerName="extract-utilities" Jan 29 16:46:20 crc kubenswrapper[4886]: E0129 16:46:20.884351 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" containerName="extract-content" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.884358 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" containerName="extract-content" Jan 29 16:46:20 crc kubenswrapper[4886]: E0129 16:46:20.884365 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" containerName="registry-server" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.884372 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" containerName="registry-server" Jan 29 16:46:20 crc kubenswrapper[4886]: E0129 16:46:20.884389 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" containerName="extract-utilities" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.884397 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" containerName="extract-utilities" Jan 29 16:46:20 crc kubenswrapper[4886]: E0129 16:46:20.884406 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" containerName="registry-server" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.884413 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" containerName="registry-server" Jan 29 16:46:20 crc kubenswrapper[4886]: E0129 16:46:20.884426 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" containerName="extract-content" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.884433 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" containerName="extract-content" Jan 29 16:46:20 crc kubenswrapper[4886]: E0129 16:46:20.884631 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70fc38f3-74c0-462d-9ad2-60f109b2d365" 
containerName="extract-content" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.884638 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="70fc38f3-74c0-462d-9ad2-60f109b2d365" containerName="extract-content" Jan 29 16:46:20 crc kubenswrapper[4886]: E0129 16:46:20.884645 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091" containerName="registry-server" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.884652 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091" containerName="registry-server" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.884806 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="57aa9115-b2d5-45aa-8ac3-e251c0907e45" containerName="registry-server" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.884819 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="70fc38f3-74c0-462d-9ad2-60f109b2d365" containerName="extract-content" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.884833 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="42b8dc70-b29d-4995-9727-9b8e032bdad9" containerName="marketplace-operator" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.884843 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="69003a39-1c09-4087-a494-ebfd69e973cf" containerName="registry-server" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.884853 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d84ce3e9-c41a-4a08-8d86-2a918d5e9450" containerName="registry-server" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.884863 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7325ad0-28bf-45e0-bbd5-160f441de091" containerName="registry-server" Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.886024 4886 util.go:30] "No sandbox for pod can be found. 
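[Editor's note: the cpu_manager E lines and the state_mem/memory_manager I lines above report the same stale (podUID, containerName) pairs left behind by the six deleted pods. A tallying sketch, assuming Python 3 stdlib only, to confirm each pair is reaped once per manager:]

```python
import re
from collections import Counter

# podUID / containerName fields as they appear in the entries above.
PAIR = re.compile(r'podUID="([0-9a-f-]+)" containerName="([\w-]+)"')

def stale_pairs(log_text: str) -> Counter:
    """Count (podUID, containerName) pairs on RemoveStaleState lines."""
    pairs = Counter()
    for line in log_text.splitlines():
        if "RemoveStaleState" in line:
            m = PAIR.search(line)
            if m:
                pairs[m.groups()] += 1
    return pairs

# In the section above, CPU-manager pairs show up once on an E line and
# the memory manager repeats only the registry-server/operator containers.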
Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.886024 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ws2lm"
Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.892786 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.922887 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8ab6536-f9ab-4191-9c15-f3fe0453e7d0-catalog-content\") pod \"certified-operators-ws2lm\" (UID: \"d8ab6536-f9ab-4191-9c15-f3fe0453e7d0\") " pod="openshift-marketplace/certified-operators-ws2lm"
Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.923236 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwj9l\" (UniqueName: \"kubernetes.io/projected/d8ab6536-f9ab-4191-9c15-f3fe0453e7d0-kube-api-access-vwj9l\") pod \"certified-operators-ws2lm\" (UID: \"d8ab6536-f9ab-4191-9c15-f3fe0453e7d0\") " pod="openshift-marketplace/certified-operators-ws2lm"
Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.923276 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8ab6536-f9ab-4191-9c15-f3fe0453e7d0-utilities\") pod \"certified-operators-ws2lm\" (UID: \"d8ab6536-f9ab-4191-9c15-f3fe0453e7d0\") " pod="openshift-marketplace/certified-operators-ws2lm"
Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.927763 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ws2lm"]
Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.942373 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s4tkp"]
Jan 29 16:46:20 crc kubenswrapper[4886]: I0129 16:46:20.948647 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s4tkp"]
Jan 29 16:46:21 crc kubenswrapper[4886]: I0129 16:46:21.023995 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwj9l\" (UniqueName: \"kubernetes.io/projected/d8ab6536-f9ab-4191-9c15-f3fe0453e7d0-kube-api-access-vwj9l\") pod \"certified-operators-ws2lm\" (UID: \"d8ab6536-f9ab-4191-9c15-f3fe0453e7d0\") " pod="openshift-marketplace/certified-operators-ws2lm"
Jan 29 16:46:21 crc kubenswrapper[4886]: I0129 16:46:21.024052 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8ab6536-f9ab-4191-9c15-f3fe0453e7d0-utilities\") pod \"certified-operators-ws2lm\" (UID: \"d8ab6536-f9ab-4191-9c15-f3fe0453e7d0\") " pod="openshift-marketplace/certified-operators-ws2lm"
Jan 29 16:46:21 crc kubenswrapper[4886]: I0129 16:46:21.024097 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8ab6536-f9ab-4191-9c15-f3fe0453e7d0-catalog-content\") pod \"certified-operators-ws2lm\" (UID: \"d8ab6536-f9ab-4191-9c15-f3fe0453e7d0\") " pod="openshift-marketplace/certified-operators-ws2lm"
Jan 29 16:46:21 crc kubenswrapper[4886]: I0129 16:46:21.024772 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8ab6536-f9ab-4191-9c15-f3fe0453e7d0-utilities\") pod \"certified-operators-ws2lm\" (UID: \"d8ab6536-f9ab-4191-9c15-f3fe0453e7d0\") " pod="openshift-marketplace/certified-operators-ws2lm"
Jan 29 16:46:21 crc kubenswrapper[4886]: I0129 16:46:21.024788 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8ab6536-f9ab-4191-9c15-f3fe0453e7d0-catalog-content\") pod \"certified-operators-ws2lm\" (UID: \"d8ab6536-f9ab-4191-9c15-f3fe0453e7d0\") " pod="openshift-marketplace/certified-operators-ws2lm"
Jan 29 16:46:21 crc kubenswrapper[4886]: I0129 16:46:21.040593 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwj9l\" (UniqueName: \"kubernetes.io/projected/d8ab6536-f9ab-4191-9c15-f3fe0453e7d0-kube-api-access-vwj9l\") pod \"certified-operators-ws2lm\" (UID: \"d8ab6536-f9ab-4191-9c15-f3fe0453e7d0\") " pod="openshift-marketplace/certified-operators-ws2lm"
Jan 29 16:46:21 crc kubenswrapper[4886]: I0129 16:46:21.254575 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ws2lm"
Jan 29 16:46:21 crc kubenswrapper[4886]: I0129 16:46:21.720592 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ws2lm"]
Jan 29 16:46:21 crc kubenswrapper[4886]: I0129 16:46:21.836884 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ws2lm" event={"ID":"d8ab6536-f9ab-4191-9c15-f3fe0453e7d0","Type":"ContainerStarted","Data":"3e21e164e499c1d13413fe08e994414a06b124f6e168c863d9cce408a4c23cd1"}
Jan 29 16:46:22 crc kubenswrapper[4886]: I0129 16:46:22.287101 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vnttp"]
Jan 29 16:46:22 crc kubenswrapper[4886]: I0129 16:46:22.288364 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vnttp"
Jan 29 16:46:22 crc kubenswrapper[4886]: I0129 16:46:22.291294 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 29 16:46:22 crc kubenswrapper[4886]: I0129 16:46:22.299283 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vnttp"]
Jan 29 16:46:22 crc kubenswrapper[4886]: I0129 16:46:22.357073 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xzhg\" (UniqueName: \"kubernetes.io/projected/fbfc768f-4803-4f4e-9019-2aacda68bc47-kube-api-access-4xzhg\") pod \"community-operators-vnttp\" (UID: \"fbfc768f-4803-4f4e-9019-2aacda68bc47\") " pod="openshift-marketplace/community-operators-vnttp"
Jan 29 16:46:22 crc kubenswrapper[4886]: I0129 16:46:22.357157 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbfc768f-4803-4f4e-9019-2aacda68bc47-catalog-content\") pod \"community-operators-vnttp\" (UID: \"fbfc768f-4803-4f4e-9019-2aacda68bc47\") " pod="openshift-marketplace/community-operators-vnttp"
Jan 29 16:46:22 crc kubenswrapper[4886]: I0129 16:46:22.357188 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbfc768f-4803-4f4e-9019-2aacda68bc47-utilities\") pod \"community-operators-vnttp\" (UID: \"fbfc768f-4803-4f4e-9019-2aacda68bc47\") " pod="openshift-marketplace/community-operators-vnttp"
Jan 29 16:46:22 crc kubenswrapper[4886]: I0129 16:46:22.458974 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbfc768f-4803-4f4e-9019-2aacda68bc47-catalog-content\") pod \"community-operators-vnttp\" (UID: \"fbfc768f-4803-4f4e-9019-2aacda68bc47\") " pod="openshift-marketplace/community-operators-vnttp"
Jan 29 16:46:22 crc kubenswrapper[4886]: I0129 16:46:22.459046 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbfc768f-4803-4f4e-9019-2aacda68bc47-utilities\") pod \"community-operators-vnttp\" (UID: \"fbfc768f-4803-4f4e-9019-2aacda68bc47\") " pod="openshift-marketplace/community-operators-vnttp"
Jan 29 16:46:22 crc kubenswrapper[4886]: I0129 16:46:22.459143 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xzhg\" (UniqueName: \"kubernetes.io/projected/fbfc768f-4803-4f4e-9019-2aacda68bc47-kube-api-access-4xzhg\") pod \"community-operators-vnttp\" (UID: \"fbfc768f-4803-4f4e-9019-2aacda68bc47\") " pod="openshift-marketplace/community-operators-vnttp"
Jan 29 16:46:22 crc kubenswrapper[4886]: I0129 16:46:22.460093 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbfc768f-4803-4f4e-9019-2aacda68bc47-catalog-content\") pod \"community-operators-vnttp\" (UID: \"fbfc768f-4803-4f4e-9019-2aacda68bc47\") " pod="openshift-marketplace/community-operators-vnttp"
Jan 29 16:46:22 crc kubenswrapper[4886]: I0129 16:46:22.460102 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbfc768f-4803-4f4e-9019-2aacda68bc47-utilities\") pod \"community-operators-vnttp\" (UID: \"fbfc768f-4803-4f4e-9019-2aacda68bc47\") " pod="openshift-marketplace/community-operators-vnttp"
Jan 29 16:46:22 crc kubenswrapper[4886]: I0129 16:46:22.483597 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xzhg\" (UniqueName: \"kubernetes.io/projected/fbfc768f-4803-4f4e-9019-2aacda68bc47-kube-api-access-4xzhg\") pod \"community-operators-vnttp\" (UID: \"fbfc768f-4803-4f4e-9019-2aacda68bc47\") " pod="openshift-marketplace/community-operators-vnttp"
Jan 29 16:46:22 crc kubenswrapper[4886]: I0129 16:46:22.615897 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vnttp"
Jan 29 16:46:22 crc kubenswrapper[4886]: I0129 16:46:22.627314 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70fc38f3-74c0-462d-9ad2-60f109b2d365" path="/var/lib/kubelet/pods/70fc38f3-74c0-462d-9ad2-60f109b2d365/volumes"
Jan 29 16:46:22 crc kubenswrapper[4886]: I0129 16:46:22.846881 4886 generic.go:334] "Generic (PLEG): container finished" podID="d8ab6536-f9ab-4191-9c15-f3fe0453e7d0" containerID="039d2652c8a0923a767a8f904be9db7661ebaebd943eeea44963f20c2ca8a4e7" exitCode=0
Jan 29 16:46:22 crc kubenswrapper[4886]: I0129 16:46:22.846916 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ws2lm" event={"ID":"d8ab6536-f9ab-4191-9c15-f3fe0453e7d0","Type":"ContainerDied","Data":"039d2652c8a0923a767a8f904be9db7661ebaebd943eeea44963f20c2ca8a4e7"}
Jan 29 16:46:23 crc kubenswrapper[4886]: I0129 16:46:23.067209 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vnttp"]
Jan 29 16:46:23 crc kubenswrapper[4886]: W0129 16:46:23.078471 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbfc768f_4803_4f4e_9019_2aacda68bc47.slice/crio-d40c7ee3bf4d4b4b9d77673f8aaefd16f5cb607897cbf316986478e281bb9b0e WatchSource:0}: Error finding container d40c7ee3bf4d4b4b9d77673f8aaefd16f5cb607897cbf316986478e281bb9b0e: Status 404 returned error can't find the container with id d40c7ee3bf4d4b4b9d77673f8aaefd16f5cb607897cbf316986478e281bb9b0e
Jan 29 16:46:23 crc kubenswrapper[4886]: I0129 16:46:23.288402 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6bdhs"]
Jan 29 16:46:23 crc kubenswrapper[4886]: I0129 16:46:23.290405 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6bdhs"
Jan 29 16:46:23 crc kubenswrapper[4886]: I0129 16:46:23.293410 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 29 16:46:23 crc kubenswrapper[4886]: I0129 16:46:23.300463 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6bdhs"]
Jan 29 16:46:23 crc kubenswrapper[4886]: I0129 16:46:23.478627 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80e49770-fa31-4780-a5ac-38a6bc1221a9-catalog-content\") pod \"redhat-operators-6bdhs\" (UID: \"80e49770-fa31-4780-a5ac-38a6bc1221a9\") " pod="openshift-marketplace/redhat-operators-6bdhs"
Jan 29 16:46:23 crc kubenswrapper[4886]: I0129 16:46:23.478703 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65qhg\" (UniqueName: \"kubernetes.io/projected/80e49770-fa31-4780-a5ac-38a6bc1221a9-kube-api-access-65qhg\") pod \"redhat-operators-6bdhs\" (UID: \"80e49770-fa31-4780-a5ac-38a6bc1221a9\") " pod="openshift-marketplace/redhat-operators-6bdhs"
Jan 29 16:46:23 crc kubenswrapper[4886]: I0129 16:46:23.478749 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80e49770-fa31-4780-a5ac-38a6bc1221a9-utilities\") pod \"redhat-operators-6bdhs\" (UID: \"80e49770-fa31-4780-a5ac-38a6bc1221a9\") " pod="openshift-marketplace/redhat-operators-6bdhs"
Jan 29 16:46:23 crc kubenswrapper[4886]: I0129 16:46:23.580770 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80e49770-fa31-4780-a5ac-38a6bc1221a9-catalog-content\") pod \"redhat-operators-6bdhs\" (UID: \"80e49770-fa31-4780-a5ac-38a6bc1221a9\") " pod="openshift-marketplace/redhat-operators-6bdhs"
Jan 29 16:46:23 crc kubenswrapper[4886]: I0129 16:46:23.580876 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65qhg\" (UniqueName: \"kubernetes.io/projected/80e49770-fa31-4780-a5ac-38a6bc1221a9-kube-api-access-65qhg\") pod \"redhat-operators-6bdhs\" (UID: \"80e49770-fa31-4780-a5ac-38a6bc1221a9\") " pod="openshift-marketplace/redhat-operators-6bdhs"
Jan 29 16:46:23 crc kubenswrapper[4886]: I0129 16:46:23.580961 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80e49770-fa31-4780-a5ac-38a6bc1221a9-utilities\") pod \"redhat-operators-6bdhs\" (UID: \"80e49770-fa31-4780-a5ac-38a6bc1221a9\") " pod="openshift-marketplace/redhat-operators-6bdhs"
Jan 29 16:46:23 crc kubenswrapper[4886]: I0129 16:46:23.581717 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80e49770-fa31-4780-a5ac-38a6bc1221a9-catalog-content\") pod \"redhat-operators-6bdhs\" (UID: \"80e49770-fa31-4780-a5ac-38a6bc1221a9\") " pod="openshift-marketplace/redhat-operators-6bdhs"
Jan 29 16:46:23 crc kubenswrapper[4886]: I0129 16:46:23.581741 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80e49770-fa31-4780-a5ac-38a6bc1221a9-utilities\") pod \"redhat-operators-6bdhs\" (UID: \"80e49770-fa31-4780-a5ac-38a6bc1221a9\") " pod="openshift-marketplace/redhat-operators-6bdhs"
Jan 29 16:46:23 crc kubenswrapper[4886]: I0129 16:46:23.609770 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65qhg\" (UniqueName: \"kubernetes.io/projected/80e49770-fa31-4780-a5ac-38a6bc1221a9-kube-api-access-65qhg\") pod \"redhat-operators-6bdhs\" (UID: \"80e49770-fa31-4780-a5ac-38a6bc1221a9\") " pod="openshift-marketplace/redhat-operators-6bdhs"
Jan 29 16:46:23 crc kubenswrapper[4886]: I0129 16:46:23.615020 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6bdhs"
Jan 29 16:46:23 crc kubenswrapper[4886]: I0129 16:46:23.857486 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vnttp" event={"ID":"fbfc768f-4803-4f4e-9019-2aacda68bc47","Type":"ContainerStarted","Data":"d40c7ee3bf4d4b4b9d77673f8aaefd16f5cb607897cbf316986478e281bb9b0e"}
Jan 29 16:46:24 crc kubenswrapper[4886]: I0129 16:46:24.142991 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6bdhs"]
Jan 29 16:46:24 crc kubenswrapper[4886]: I0129 16:46:24.681685 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-52bfx"]
Jan 29 16:46:24 crc kubenswrapper[4886]: I0129 16:46:24.683297 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-52bfx"
Jan 29 16:46:24 crc kubenswrapper[4886]: I0129 16:46:24.686187 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 29 16:46:24 crc kubenswrapper[4886]: I0129 16:46:24.687521 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-52bfx"]
Jan 29 16:46:24 crc kubenswrapper[4886]: I0129 16:46:24.799637 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca-utilities\") pod \"redhat-marketplace-52bfx\" (UID: \"87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca\") " pod="openshift-marketplace/redhat-marketplace-52bfx"
Jan 29 16:46:24 crc kubenswrapper[4886]: I0129 16:46:24.799946 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7694n\" (UniqueName: \"kubernetes.io/projected/87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca-kube-api-access-7694n\") pod \"redhat-marketplace-52bfx\" (UID: \"87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca\") " pod="openshift-marketplace/redhat-marketplace-52bfx"
Jan 29 16:46:24 crc kubenswrapper[4886]: I0129 16:46:24.799982 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca-catalog-content\") pod \"redhat-marketplace-52bfx\" (UID: \"87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca\") " pod="openshift-marketplace/redhat-marketplace-52bfx"
Jan 29 16:46:24 crc kubenswrapper[4886]: I0129 16:46:24.868535 4886 generic.go:334] "Generic (PLEG): container finished" podID="fbfc768f-4803-4f4e-9019-2aacda68bc47" containerID="d660e8ba51141212057357f1c6afcfdf2f206393e2a4f6b098221cfd1be48212" exitCode=0
event={"ID":"fbfc768f-4803-4f4e-9019-2aacda68bc47","Type":"ContainerDied","Data":"d660e8ba51141212057357f1c6afcfdf2f206393e2a4f6b098221cfd1be48212"} Jan 29 16:46:24 crc kubenswrapper[4886]: I0129 16:46:24.870975 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bdhs" event={"ID":"80e49770-fa31-4780-a5ac-38a6bc1221a9","Type":"ContainerStarted","Data":"bb10669d5c9319d3f6b647732aa83aaed3939b3c1381053c9f2eca3c370d3282"} Jan 29 16:46:24 crc kubenswrapper[4886]: I0129 16:46:24.870999 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bdhs" event={"ID":"80e49770-fa31-4780-a5ac-38a6bc1221a9","Type":"ContainerStarted","Data":"0a0fa418ed3ea00bd740848278269fe5bbbe31cf0912ca198a306059478ec782"} Jan 29 16:46:24 crc kubenswrapper[4886]: I0129 16:46:24.901245 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7694n\" (UniqueName: \"kubernetes.io/projected/87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca-kube-api-access-7694n\") pod \"redhat-marketplace-52bfx\" (UID: \"87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca\") " pod="openshift-marketplace/redhat-marketplace-52bfx" Jan 29 16:46:24 crc kubenswrapper[4886]: I0129 16:46:24.901307 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca-catalog-content\") pod \"redhat-marketplace-52bfx\" (UID: \"87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca\") " pod="openshift-marketplace/redhat-marketplace-52bfx" Jan 29 16:46:24 crc kubenswrapper[4886]: I0129 16:46:24.901487 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca-utilities\") pod \"redhat-marketplace-52bfx\" (UID: \"87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca\") " pod="openshift-marketplace/redhat-marketplace-52bfx" Jan 29 16:46:24 crc kubenswrapper[4886]: I0129 16:46:24.902255 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca-utilities\") pod \"redhat-marketplace-52bfx\" (UID: \"87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca\") " pod="openshift-marketplace/redhat-marketplace-52bfx" Jan 29 16:46:24 crc kubenswrapper[4886]: I0129 16:46:24.902414 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca-catalog-content\") pod \"redhat-marketplace-52bfx\" (UID: \"87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca\") " pod="openshift-marketplace/redhat-marketplace-52bfx" Jan 29 16:46:24 crc kubenswrapper[4886]: I0129 16:46:24.919005 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7694n\" (UniqueName: \"kubernetes.io/projected/87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca-kube-api-access-7694n\") pod \"redhat-marketplace-52bfx\" (UID: \"87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca\") " pod="openshift-marketplace/redhat-marketplace-52bfx" Jan 29 16:46:25 crc kubenswrapper[4886]: I0129 16:46:25.049750 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-52bfx" Jan 29 16:46:25 crc kubenswrapper[4886]: I0129 16:46:25.512398 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-52bfx"] Jan 29 16:46:25 crc kubenswrapper[4886]: W0129 16:46:25.524712 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87b65e80_b30f_4ac4_bb06_ec8eb04cd7ca.slice/crio-2fa2b8c2ffecf9cfabe0d29fe2ec3fcc727cf17a5638653e0dda06d83e26ae2e WatchSource:0}: Error finding container 2fa2b8c2ffecf9cfabe0d29fe2ec3fcc727cf17a5638653e0dda06d83e26ae2e: Status 404 returned error can't find the container with id 2fa2b8c2ffecf9cfabe0d29fe2ec3fcc727cf17a5638653e0dda06d83e26ae2e Jan 29 16:46:25 crc kubenswrapper[4886]: I0129 16:46:25.879128 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-52bfx" event={"ID":"87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca","Type":"ContainerStarted","Data":"2fa2b8c2ffecf9cfabe0d29fe2ec3fcc727cf17a5638653e0dda06d83e26ae2e"} Jan 29 16:46:25 crc kubenswrapper[4886]: I0129 16:46:25.881045 4886 generic.go:334] "Generic (PLEG): container finished" podID="80e49770-fa31-4780-a5ac-38a6bc1221a9" containerID="bb10669d5c9319d3f6b647732aa83aaed3939b3c1381053c9f2eca3c370d3282" exitCode=0 Jan 29 16:46:25 crc kubenswrapper[4886]: I0129 16:46:25.881075 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bdhs" event={"ID":"80e49770-fa31-4780-a5ac-38a6bc1221a9","Type":"ContainerDied","Data":"bb10669d5c9319d3f6b647732aa83aaed3939b3c1381053c9f2eca3c370d3282"} Jan 29 16:46:26 crc kubenswrapper[4886]: I0129 16:46:26.892245 4886 generic.go:334] "Generic (PLEG): container finished" podID="87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca" containerID="465a7f1f1f8324da6688bb49b19359ff8dfdf2d01808f80da09155338e2c3325" exitCode=0 Jan 29 16:46:26 crc kubenswrapper[4886]: I0129 16:46:26.892346 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-52bfx" event={"ID":"87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca","Type":"ContainerDied","Data":"465a7f1f1f8324da6688bb49b19359ff8dfdf2d01808f80da09155338e2c3325"} Jan 29 16:46:32 crc kubenswrapper[4886]: I0129 16:46:32.955538 4886 generic.go:334] "Generic (PLEG): container finished" podID="80e49770-fa31-4780-a5ac-38a6bc1221a9" containerID="678b6453290fdf5637a9f4f9fc3768a75a11de08b2393b56c97065c9afb6c6c5" exitCode=0 Jan 29 16:46:32 crc kubenswrapper[4886]: I0129 16:46:32.955578 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bdhs" event={"ID":"80e49770-fa31-4780-a5ac-38a6bc1221a9","Type":"ContainerDied","Data":"678b6453290fdf5637a9f4f9fc3768a75a11de08b2393b56c97065c9afb6c6c5"} Jan 29 16:46:32 crc kubenswrapper[4886]: I0129 16:46:32.959905 4886 generic.go:334] "Generic (PLEG): container finished" podID="87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca" containerID="c271f5517b1a393ad7a319989ad78bb14460e266f8b7d0dd30fa11b2117eed12" exitCode=0 Jan 29 16:46:32 crc kubenswrapper[4886]: I0129 16:46:32.960022 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-52bfx" event={"ID":"87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca","Type":"ContainerDied","Data":"c271f5517b1a393ad7a319989ad78bb14460e266f8b7d0dd30fa11b2117eed12"} Jan 29 16:46:32 crc kubenswrapper[4886]: I0129 16:46:32.964729 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-vnttp" event={"ID":"fbfc768f-4803-4f4e-9019-2aacda68bc47","Type":"ContainerDied","Data":"739e0c3bda5b06aeff00908644b4d9e39c1f3a83a5a16cd5592a2b3a0a84edfd"} Jan 29 16:46:32 crc kubenswrapper[4886]: I0129 16:46:32.965507 4886 generic.go:334] "Generic (PLEG): container finished" podID="fbfc768f-4803-4f4e-9019-2aacda68bc47" containerID="739e0c3bda5b06aeff00908644b4d9e39c1f3a83a5a16cd5592a2b3a0a84edfd" exitCode=0 Jan 29 16:46:32 crc kubenswrapper[4886]: I0129 16:46:32.968588 4886 generic.go:334] "Generic (PLEG): container finished" podID="d8ab6536-f9ab-4191-9c15-f3fe0453e7d0" containerID="01d7355dcfd37a7bab0f2bcc4a2027184d154d94d7fe052a3562aac5da1f3ea9" exitCode=0 Jan 29 16:46:32 crc kubenswrapper[4886]: I0129 16:46:32.968619 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ws2lm" event={"ID":"d8ab6536-f9ab-4191-9c15-f3fe0453e7d0","Type":"ContainerDied","Data":"01d7355dcfd37a7bab0f2bcc4a2027184d154d94d7fe052a3562aac5da1f3ea9"} Jan 29 16:46:33 crc kubenswrapper[4886]: I0129 16:46:33.976620 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-52bfx" event={"ID":"87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca","Type":"ContainerStarted","Data":"c2358db1a793cf91ba9b1970509b2d9ead3a2a92dd1c2dd79c206d3c2ac53fe1"} Jan 29 16:46:33 crc kubenswrapper[4886]: I0129 16:46:33.979685 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vnttp" event={"ID":"fbfc768f-4803-4f4e-9019-2aacda68bc47","Type":"ContainerStarted","Data":"fd2a379c76b14741304253025eccc7f873d5f70c10124608ac47d2565d5b17aa"} Jan 29 16:46:33 crc kubenswrapper[4886]: I0129 16:46:33.982202 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ws2lm" event={"ID":"d8ab6536-f9ab-4191-9c15-f3fe0453e7d0","Type":"ContainerStarted","Data":"c4980d3736fdac0444c07a1fb0ca4e2f07d9f6fe2014605185318260906ccd7f"} Jan 29 16:46:34 crc kubenswrapper[4886]: I0129 16:46:34.000099 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-52bfx" podStartSLOduration=4.201101385 podStartE2EDuration="10.000082429s" podCreationTimestamp="2026-01-29 16:46:24 +0000 UTC" firstStartedPulling="2026-01-29 16:46:27.724131115 +0000 UTC m=+1470.632850397" lastFinishedPulling="2026-01-29 16:46:33.523112169 +0000 UTC m=+1476.431831441" observedRunningTime="2026-01-29 16:46:33.99468046 +0000 UTC m=+1476.903399732" watchObservedRunningTime="2026-01-29 16:46:34.000082429 +0000 UTC m=+1476.908801721" Jan 29 16:46:34 crc kubenswrapper[4886]: I0129 16:46:34.019177 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ws2lm" podStartSLOduration=4.376141396 podStartE2EDuration="14.01915911s" podCreationTimestamp="2026-01-29 16:46:20 +0000 UTC" firstStartedPulling="2026-01-29 16:46:23.860158092 +0000 UTC m=+1466.768877404" lastFinishedPulling="2026-01-29 16:46:33.503175806 +0000 UTC m=+1476.411895118" observedRunningTime="2026-01-29 16:46:34.017467376 +0000 UTC m=+1476.926186658" watchObservedRunningTime="2026-01-29 16:46:34.01915911 +0000 UTC m=+1476.927878382" Jan 29 16:46:34 crc kubenswrapper[4886]: I0129 16:46:34.036571 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vnttp" podStartSLOduration=3.429299607 podStartE2EDuration="12.036552157s" 
podCreationTimestamp="2026-01-29 16:46:22 +0000 UTC" firstStartedPulling="2026-01-29 16:46:24.870618946 +0000 UTC m=+1467.779338218" lastFinishedPulling="2026-01-29 16:46:33.477871486 +0000 UTC m=+1476.386590768" observedRunningTime="2026-01-29 16:46:34.035563261 +0000 UTC m=+1476.944282533" watchObservedRunningTime="2026-01-29 16:46:34.036552157 +0000 UTC m=+1476.945271429" Jan 29 16:46:34 crc kubenswrapper[4886]: I0129 16:46:34.991287 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bdhs" event={"ID":"80e49770-fa31-4780-a5ac-38a6bc1221a9","Type":"ContainerStarted","Data":"ec7b6330f582c97b42a3f4b7b50704b44b590e0a9732dc553abfaf3dade38a3f"} Jan 29 16:46:35 crc kubenswrapper[4886]: I0129 16:46:35.050489 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-52bfx" Jan 29 16:46:35 crc kubenswrapper[4886]: I0129 16:46:35.050544 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-52bfx" Jan 29 16:46:36 crc kubenswrapper[4886]: I0129 16:46:36.096559 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-52bfx" podUID="87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca" containerName="registry-server" probeResult="failure" output=< Jan 29 16:46:36 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Jan 29 16:46:36 crc kubenswrapper[4886]: > Jan 29 16:46:41 crc kubenswrapper[4886]: I0129 16:46:41.255669 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ws2lm" Jan 29 16:46:41 crc kubenswrapper[4886]: I0129 16:46:41.256181 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ws2lm" Jan 29 16:46:41 crc kubenswrapper[4886]: I0129 16:46:41.325849 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ws2lm" Jan 29 16:46:41 crc kubenswrapper[4886]: I0129 16:46:41.355973 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6bdhs" podStartSLOduration=10.337538849 podStartE2EDuration="18.355947382s" podCreationTimestamp="2026-01-29 16:46:23 +0000 UTC" firstStartedPulling="2026-01-29 16:46:25.8830051 +0000 UTC m=+1468.791724392" lastFinishedPulling="2026-01-29 16:46:33.901413653 +0000 UTC m=+1476.810132925" observedRunningTime="2026-01-29 16:46:35.013719954 +0000 UTC m=+1477.922439236" watchObservedRunningTime="2026-01-29 16:46:41.355947382 +0000 UTC m=+1484.264666684" Jan 29 16:46:42 crc kubenswrapper[4886]: I0129 16:46:42.108568 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ws2lm" Jan 29 16:46:42 crc kubenswrapper[4886]: I0129 16:46:42.625278 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vnttp" Jan 29 16:46:42 crc kubenswrapper[4886]: I0129 16:46:42.625368 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vnttp" Jan 29 16:46:42 crc kubenswrapper[4886]: I0129 16:46:42.686438 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vnttp" Jan 29 16:46:43 crc kubenswrapper[4886]: I0129 16:46:43.105783 4886 kubelet.go:2542] "SyncLoop 
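[Editor's note: the podStartSLOduration entries above carry both wall-clock timestamps and monotonic offsets (m=+...); the image-pull window is lastFinishedPulling minus firstStartedPulling, and in these entries podStartSLOduration equals podStartE2EDuration minus that pull window. A worked check in Python 3 stdlib, with the numbers copied from the redhat-marketplace-52bfx line:]

```python
import re

LINE = ('podStartSLOduration=4.201101385 podStartE2EDuration="10.000082429s" '
        'firstStartedPulling="2026-01-29 16:46:27.724131115 +0000 UTC '
        'm=+1470.632850397" lastFinishedPulling="2026-01-29 16:46:33.523112169 '
        '+0000 UTC m=+1476.431831441"')

# Monotonic offsets are immune to wall-clock adjustments; use them for deltas.
first, last = (float(m) for m in re.findall(r'm=\+([\d.]+)', LINE))
pull = last - first                      # 5.798981044 s spent pulling
e2e = float(re.search(r'podStartE2EDuration="([\d.]+)s"', LINE).group(1))
slo = float(re.search(r'podStartSLOduration=([\d.]+)', LINE).group(1))
assert abs((e2e - pull) - slo) < 1e-9    # SLO duration excludes the pull window
print(f"pull window: {pull:.3f}s of {e2e:.3f}s end-to-end")
```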
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vnttp" Jan 29 16:46:43 crc kubenswrapper[4886]: I0129 16:46:43.616987 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6bdhs" Jan 29 16:46:43 crc kubenswrapper[4886]: I0129 16:46:43.617048 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6bdhs" Jan 29 16:46:43 crc kubenswrapper[4886]: I0129 16:46:43.660432 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6bdhs" Jan 29 16:46:44 crc kubenswrapper[4886]: I0129 16:46:44.114790 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6bdhs" Jan 29 16:46:45 crc kubenswrapper[4886]: I0129 16:46:45.098201 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-52bfx" Jan 29 16:46:45 crc kubenswrapper[4886]: I0129 16:46:45.151501 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-52bfx" Jan 29 16:47:29 crc kubenswrapper[4886]: I0129 16:47:29.660957 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:47:29 crc kubenswrapper[4886]: I0129 16:47:29.661414 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:47:59 crc kubenswrapper[4886]: I0129 16:47:59.660608 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:47:59 crc kubenswrapper[4886]: I0129 16:47:59.661308 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:48:02 crc kubenswrapper[4886]: I0129 16:48:02.899290 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8lqx2"] Jan 29 16:48:02 crc kubenswrapper[4886]: I0129 16:48:02.901220 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8lqx2" Jan 29 16:48:02 crc kubenswrapper[4886]: I0129 16:48:02.911565 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8lqx2"] Jan 29 16:48:03 crc kubenswrapper[4886]: I0129 16:48:03.063499 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpfkc\" (UniqueName: \"kubernetes.io/projected/860fb30c-4c3d-4f6f-95ff-1de487069087-kube-api-access-bpfkc\") pod \"certified-operators-8lqx2\" (UID: \"860fb30c-4c3d-4f6f-95ff-1de487069087\") " pod="openshift-marketplace/certified-operators-8lqx2" Jan 29 16:48:03 crc kubenswrapper[4886]: I0129 16:48:03.063561 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/860fb30c-4c3d-4f6f-95ff-1de487069087-catalog-content\") pod \"certified-operators-8lqx2\" (UID: \"860fb30c-4c3d-4f6f-95ff-1de487069087\") " pod="openshift-marketplace/certified-operators-8lqx2" Jan 29 16:48:03 crc kubenswrapper[4886]: I0129 16:48:03.063787 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/860fb30c-4c3d-4f6f-95ff-1de487069087-utilities\") pod \"certified-operators-8lqx2\" (UID: \"860fb30c-4c3d-4f6f-95ff-1de487069087\") " pod="openshift-marketplace/certified-operators-8lqx2" Jan 29 16:48:03 crc kubenswrapper[4886]: I0129 16:48:03.165418 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpfkc\" (UniqueName: \"kubernetes.io/projected/860fb30c-4c3d-4f6f-95ff-1de487069087-kube-api-access-bpfkc\") pod \"certified-operators-8lqx2\" (UID: \"860fb30c-4c3d-4f6f-95ff-1de487069087\") " pod="openshift-marketplace/certified-operators-8lqx2" Jan 29 16:48:03 crc kubenswrapper[4886]: I0129 16:48:03.165510 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/860fb30c-4c3d-4f6f-95ff-1de487069087-catalog-content\") pod \"certified-operators-8lqx2\" (UID: \"860fb30c-4c3d-4f6f-95ff-1de487069087\") " pod="openshift-marketplace/certified-operators-8lqx2" Jan 29 16:48:03 crc kubenswrapper[4886]: I0129 16:48:03.165608 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/860fb30c-4c3d-4f6f-95ff-1de487069087-utilities\") pod \"certified-operators-8lqx2\" (UID: \"860fb30c-4c3d-4f6f-95ff-1de487069087\") " pod="openshift-marketplace/certified-operators-8lqx2" Jan 29 16:48:03 crc kubenswrapper[4886]: I0129 16:48:03.166209 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/860fb30c-4c3d-4f6f-95ff-1de487069087-utilities\") pod \"certified-operators-8lqx2\" (UID: \"860fb30c-4c3d-4f6f-95ff-1de487069087\") " pod="openshift-marketplace/certified-operators-8lqx2" Jan 29 16:48:03 crc kubenswrapper[4886]: I0129 16:48:03.166965 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/860fb30c-4c3d-4f6f-95ff-1de487069087-catalog-content\") pod \"certified-operators-8lqx2\" (UID: \"860fb30c-4c3d-4f6f-95ff-1de487069087\") " pod="openshift-marketplace/certified-operators-8lqx2" Jan 29 16:48:03 crc kubenswrapper[4886]: I0129 16:48:03.189949 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bpfkc\" (UniqueName: \"kubernetes.io/projected/860fb30c-4c3d-4f6f-95ff-1de487069087-kube-api-access-bpfkc\") pod \"certified-operators-8lqx2\" (UID: \"860fb30c-4c3d-4f6f-95ff-1de487069087\") " pod="openshift-marketplace/certified-operators-8lqx2" Jan 29 16:48:03 crc kubenswrapper[4886]: I0129 16:48:03.228705 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8lqx2" Jan 29 16:48:03 crc kubenswrapper[4886]: I0129 16:48:03.505086 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8lqx2"] Jan 29 16:48:03 crc kubenswrapper[4886]: I0129 16:48:03.704250 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lqx2" event={"ID":"860fb30c-4c3d-4f6f-95ff-1de487069087","Type":"ContainerStarted","Data":"4a87461a09c78133699165864f57ffc889764f0fa2a316800d5d0c489c5bd1b0"} Jan 29 16:48:04 crc kubenswrapper[4886]: I0129 16:48:04.717645 4886 generic.go:334] "Generic (PLEG): container finished" podID="860fb30c-4c3d-4f6f-95ff-1de487069087" containerID="c5afd1cb7edd41e37a61e7964e9a3936fe9580078d8088abebe1e915156bc1d7" exitCode=0 Jan 29 16:48:04 crc kubenswrapper[4886]: I0129 16:48:04.717870 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lqx2" event={"ID":"860fb30c-4c3d-4f6f-95ff-1de487069087","Type":"ContainerDied","Data":"c5afd1cb7edd41e37a61e7964e9a3936fe9580078d8088abebe1e915156bc1d7"} Jan 29 16:48:04 crc kubenswrapper[4886]: I0129 16:48:04.721520 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:48:06 crc kubenswrapper[4886]: I0129 16:48:06.738207 4886 generic.go:334] "Generic (PLEG): container finished" podID="860fb30c-4c3d-4f6f-95ff-1de487069087" containerID="237729db2181ba06bb5b9a2990ef2432c906b9314a10c99ac22c691a2275eb5e" exitCode=0 Jan 29 16:48:06 crc kubenswrapper[4886]: I0129 16:48:06.738265 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lqx2" event={"ID":"860fb30c-4c3d-4f6f-95ff-1de487069087","Type":"ContainerDied","Data":"237729db2181ba06bb5b9a2990ef2432c906b9314a10c99ac22c691a2275eb5e"} Jan 29 16:48:08 crc kubenswrapper[4886]: I0129 16:48:08.758791 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lqx2" event={"ID":"860fb30c-4c3d-4f6f-95ff-1de487069087","Type":"ContainerStarted","Data":"25be302db85a3629c40f39797bdcb5e4d80c59b44b547a44db6482c33891e0dd"} Jan 29 16:48:08 crc kubenswrapper[4886]: I0129 16:48:08.781834 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8lqx2" podStartSLOduration=3.755151723 podStartE2EDuration="6.781811368s" podCreationTimestamp="2026-01-29 16:48:02 +0000 UTC" firstStartedPulling="2026-01-29 16:48:04.721173477 +0000 UTC m=+1567.629892759" lastFinishedPulling="2026-01-29 16:48:07.747833092 +0000 UTC m=+1570.656552404" observedRunningTime="2026-01-29 16:48:08.778383491 +0000 UTC m=+1571.687102763" watchObservedRunningTime="2026-01-29 16:48:08.781811368 +0000 UTC m=+1571.690530650" Jan 29 16:48:13 crc kubenswrapper[4886]: I0129 16:48:13.229778 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8lqx2" Jan 29 16:48:13 crc kubenswrapper[4886]: I0129 16:48:13.230459 4886 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8lqx2" Jan 29 16:48:13 crc kubenswrapper[4886]: I0129 16:48:13.294800 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8lqx2" Jan 29 16:48:13 crc kubenswrapper[4886]: I0129 16:48:13.848465 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8lqx2" Jan 29 16:48:13 crc kubenswrapper[4886]: I0129 16:48:13.919937 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8lqx2"] Jan 29 16:48:15 crc kubenswrapper[4886]: I0129 16:48:15.817673 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8lqx2" podUID="860fb30c-4c3d-4f6f-95ff-1de487069087" containerName="registry-server" containerID="cri-o://25be302db85a3629c40f39797bdcb5e4d80c59b44b547a44db6482c33891e0dd" gracePeriod=2 Jan 29 16:48:16 crc kubenswrapper[4886]: I0129 16:48:16.375644 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8lqx2" Jan 29 16:48:16 crc kubenswrapper[4886]: I0129 16:48:16.507927 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpfkc\" (UniqueName: \"kubernetes.io/projected/860fb30c-4c3d-4f6f-95ff-1de487069087-kube-api-access-bpfkc\") pod \"860fb30c-4c3d-4f6f-95ff-1de487069087\" (UID: \"860fb30c-4c3d-4f6f-95ff-1de487069087\") " Jan 29 16:48:16 crc kubenswrapper[4886]: I0129 16:48:16.508100 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/860fb30c-4c3d-4f6f-95ff-1de487069087-utilities\") pod \"860fb30c-4c3d-4f6f-95ff-1de487069087\" (UID: \"860fb30c-4c3d-4f6f-95ff-1de487069087\") " Jan 29 16:48:16 crc kubenswrapper[4886]: I0129 16:48:16.508147 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/860fb30c-4c3d-4f6f-95ff-1de487069087-catalog-content\") pod \"860fb30c-4c3d-4f6f-95ff-1de487069087\" (UID: \"860fb30c-4c3d-4f6f-95ff-1de487069087\") " Jan 29 16:48:16 crc kubenswrapper[4886]: I0129 16:48:16.509647 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/860fb30c-4c3d-4f6f-95ff-1de487069087-utilities" (OuterVolumeSpecName: "utilities") pod "860fb30c-4c3d-4f6f-95ff-1de487069087" (UID: "860fb30c-4c3d-4f6f-95ff-1de487069087"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:48:16 crc kubenswrapper[4886]: I0129 16:48:16.517197 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/860fb30c-4c3d-4f6f-95ff-1de487069087-kube-api-access-bpfkc" (OuterVolumeSpecName: "kube-api-access-bpfkc") pod "860fb30c-4c3d-4f6f-95ff-1de487069087" (UID: "860fb30c-4c3d-4f6f-95ff-1de487069087"). InnerVolumeSpecName "kube-api-access-bpfkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:48:16 crc kubenswrapper[4886]: I0129 16:48:16.559077 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/860fb30c-4c3d-4f6f-95ff-1de487069087-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "860fb30c-4c3d-4f6f-95ff-1de487069087" (UID: "860fb30c-4c3d-4f6f-95ff-1de487069087"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:48:16 crc kubenswrapper[4886]: I0129 16:48:16.609607 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpfkc\" (UniqueName: \"kubernetes.io/projected/860fb30c-4c3d-4f6f-95ff-1de487069087-kube-api-access-bpfkc\") on node \"crc\" DevicePath \"\"" Jan 29 16:48:16 crc kubenswrapper[4886]: I0129 16:48:16.609648 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/860fb30c-4c3d-4f6f-95ff-1de487069087-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:48:16 crc kubenswrapper[4886]: I0129 16:48:16.609663 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/860fb30c-4c3d-4f6f-95ff-1de487069087-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:48:16 crc kubenswrapper[4886]: I0129 16:48:16.826926 4886 generic.go:334] "Generic (PLEG): container finished" podID="860fb30c-4c3d-4f6f-95ff-1de487069087" containerID="25be302db85a3629c40f39797bdcb5e4d80c59b44b547a44db6482c33891e0dd" exitCode=0 Jan 29 16:48:16 crc kubenswrapper[4886]: I0129 16:48:16.826967 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lqx2" event={"ID":"860fb30c-4c3d-4f6f-95ff-1de487069087","Type":"ContainerDied","Data":"25be302db85a3629c40f39797bdcb5e4d80c59b44b547a44db6482c33891e0dd"} Jan 29 16:48:16 crc kubenswrapper[4886]: I0129 16:48:16.826992 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lqx2" event={"ID":"860fb30c-4c3d-4f6f-95ff-1de487069087","Type":"ContainerDied","Data":"4a87461a09c78133699165864f57ffc889764f0fa2a316800d5d0c489c5bd1b0"} Jan 29 16:48:16 crc kubenswrapper[4886]: I0129 16:48:16.827021 4886 scope.go:117] "RemoveContainer" containerID="25be302db85a3629c40f39797bdcb5e4d80c59b44b547a44db6482c33891e0dd" Jan 29 16:48:16 crc kubenswrapper[4886]: I0129 16:48:16.827075 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8lqx2" Jan 29 16:48:16 crc kubenswrapper[4886]: I0129 16:48:16.857788 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8lqx2"] Jan 29 16:48:16 crc kubenswrapper[4886]: I0129 16:48:16.863447 4886 scope.go:117] "RemoveContainer" containerID="237729db2181ba06bb5b9a2990ef2432c906b9314a10c99ac22c691a2275eb5e" Jan 29 16:48:16 crc kubenswrapper[4886]: I0129 16:48:16.868085 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8lqx2"] Jan 29 16:48:16 crc kubenswrapper[4886]: I0129 16:48:16.882316 4886 scope.go:117] "RemoveContainer" containerID="c5afd1cb7edd41e37a61e7964e9a3936fe9580078d8088abebe1e915156bc1d7" Jan 29 16:48:16 crc kubenswrapper[4886]: I0129 16:48:16.923932 4886 scope.go:117] "RemoveContainer" containerID="25be302db85a3629c40f39797bdcb5e4d80c59b44b547a44db6482c33891e0dd" Jan 29 16:48:16 crc kubenswrapper[4886]: E0129 16:48:16.924450 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25be302db85a3629c40f39797bdcb5e4d80c59b44b547a44db6482c33891e0dd\": container with ID starting with 25be302db85a3629c40f39797bdcb5e4d80c59b44b547a44db6482c33891e0dd not found: ID does not exist" containerID="25be302db85a3629c40f39797bdcb5e4d80c59b44b547a44db6482c33891e0dd" Jan 29 16:48:16 crc kubenswrapper[4886]: I0129 16:48:16.924483 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25be302db85a3629c40f39797bdcb5e4d80c59b44b547a44db6482c33891e0dd"} err="failed to get container status \"25be302db85a3629c40f39797bdcb5e4d80c59b44b547a44db6482c33891e0dd\": rpc error: code = NotFound desc = could not find container \"25be302db85a3629c40f39797bdcb5e4d80c59b44b547a44db6482c33891e0dd\": container with ID starting with 25be302db85a3629c40f39797bdcb5e4d80c59b44b547a44db6482c33891e0dd not found: ID does not exist" Jan 29 16:48:16 crc kubenswrapper[4886]: I0129 16:48:16.924507 4886 scope.go:117] "RemoveContainer" containerID="237729db2181ba06bb5b9a2990ef2432c906b9314a10c99ac22c691a2275eb5e" Jan 29 16:48:16 crc kubenswrapper[4886]: E0129 16:48:16.924929 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"237729db2181ba06bb5b9a2990ef2432c906b9314a10c99ac22c691a2275eb5e\": container with ID starting with 237729db2181ba06bb5b9a2990ef2432c906b9314a10c99ac22c691a2275eb5e not found: ID does not exist" containerID="237729db2181ba06bb5b9a2990ef2432c906b9314a10c99ac22c691a2275eb5e" Jan 29 16:48:16 crc kubenswrapper[4886]: I0129 16:48:16.924961 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"237729db2181ba06bb5b9a2990ef2432c906b9314a10c99ac22c691a2275eb5e"} err="failed to get container status \"237729db2181ba06bb5b9a2990ef2432c906b9314a10c99ac22c691a2275eb5e\": rpc error: code = NotFound desc = could not find container \"237729db2181ba06bb5b9a2990ef2432c906b9314a10c99ac22c691a2275eb5e\": container with ID starting with 237729db2181ba06bb5b9a2990ef2432c906b9314a10c99ac22c691a2275eb5e not found: ID does not exist" Jan 29 16:48:16 crc kubenswrapper[4886]: I0129 16:48:16.924982 4886 scope.go:117] "RemoveContainer" containerID="c5afd1cb7edd41e37a61e7964e9a3936fe9580078d8088abebe1e915156bc1d7" Jan 29 16:48:16 crc kubenswrapper[4886]: E0129 16:48:16.925348 4886 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c5afd1cb7edd41e37a61e7964e9a3936fe9580078d8088abebe1e915156bc1d7\": container with ID starting with c5afd1cb7edd41e37a61e7964e9a3936fe9580078d8088abebe1e915156bc1d7 not found: ID does not exist" containerID="c5afd1cb7edd41e37a61e7964e9a3936fe9580078d8088abebe1e915156bc1d7" Jan 29 16:48:16 crc kubenswrapper[4886]: I0129 16:48:16.925451 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5afd1cb7edd41e37a61e7964e9a3936fe9580078d8088abebe1e915156bc1d7"} err="failed to get container status \"c5afd1cb7edd41e37a61e7964e9a3936fe9580078d8088abebe1e915156bc1d7\": rpc error: code = NotFound desc = could not find container \"c5afd1cb7edd41e37a61e7964e9a3936fe9580078d8088abebe1e915156bc1d7\": container with ID starting with c5afd1cb7edd41e37a61e7964e9a3936fe9580078d8088abebe1e915156bc1d7 not found: ID does not exist" Jan 29 16:48:18 crc kubenswrapper[4886]: I0129 16:48:18.627543 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="860fb30c-4c3d-4f6f-95ff-1de487069087" path="/var/lib/kubelet/pods/860fb30c-4c3d-4f6f-95ff-1de487069087/volumes" Jan 29 16:48:29 crc kubenswrapper[4886]: I0129 16:48:29.661077 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:48:29 crc kubenswrapper[4886]: I0129 16:48:29.661733 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:48:29 crc kubenswrapper[4886]: I0129 16:48:29.661796 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" Jan 29 16:48:29 crc kubenswrapper[4886]: I0129 16:48:29.663219 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463"} pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:48:29 crc kubenswrapper[4886]: I0129 16:48:29.663410 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" containerID="cri-o://705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463" gracePeriod=600 Jan 29 16:48:29 crc kubenswrapper[4886]: E0129 16:48:29.789849 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 16:48:29 crc kubenswrapper[4886]: I0129 16:48:29.963825 4886 generic.go:334] 
"Generic (PLEG): container finished" podID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerID="705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463" exitCode=0 Jan 29 16:48:29 crc kubenswrapper[4886]: I0129 16:48:29.963941 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerDied","Data":"705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463"} Jan 29 16:48:29 crc kubenswrapper[4886]: I0129 16:48:29.964035 4886 scope.go:117] "RemoveContainer" containerID="e07342110c4b02787cb4723c63fa377397be4b574d1be34193ab1f7b4cebac54" Jan 29 16:48:29 crc kubenswrapper[4886]: I0129 16:48:29.964936 4886 scope.go:117] "RemoveContainer" containerID="705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463" Jan 29 16:48:29 crc kubenswrapper[4886]: E0129 16:48:29.965469 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 16:48:42 crc kubenswrapper[4886]: I0129 16:48:42.614967 4886 scope.go:117] "RemoveContainer" containerID="705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463" Jan 29 16:48:42 crc kubenswrapper[4886]: E0129 16:48:42.616120 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 16:48:46 crc kubenswrapper[4886]: I0129 16:48:46.226438 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rb649"] Jan 29 16:48:46 crc kubenswrapper[4886]: E0129 16:48:46.227084 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="860fb30c-4c3d-4f6f-95ff-1de487069087" containerName="extract-content" Jan 29 16:48:46 crc kubenswrapper[4886]: I0129 16:48:46.227102 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="860fb30c-4c3d-4f6f-95ff-1de487069087" containerName="extract-content" Jan 29 16:48:46 crc kubenswrapper[4886]: E0129 16:48:46.227130 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="860fb30c-4c3d-4f6f-95ff-1de487069087" containerName="registry-server" Jan 29 16:48:46 crc kubenswrapper[4886]: I0129 16:48:46.227138 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="860fb30c-4c3d-4f6f-95ff-1de487069087" containerName="registry-server" Jan 29 16:48:46 crc kubenswrapper[4886]: E0129 16:48:46.227158 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="860fb30c-4c3d-4f6f-95ff-1de487069087" containerName="extract-utilities" Jan 29 16:48:46 crc kubenswrapper[4886]: I0129 16:48:46.227168 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="860fb30c-4c3d-4f6f-95ff-1de487069087" containerName="extract-utilities" Jan 29 16:48:46 crc kubenswrapper[4886]: I0129 16:48:46.227354 4886 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="860fb30c-4c3d-4f6f-95ff-1de487069087" containerName="registry-server" Jan 29 16:48:46 crc kubenswrapper[4886]: I0129 16:48:46.228589 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rb649" Jan 29 16:48:46 crc kubenswrapper[4886]: I0129 16:48:46.250094 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rb649"] Jan 29 16:48:46 crc kubenswrapper[4886]: I0129 16:48:46.340405 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc1d1fd3-36c5-4b47-bd32-230dc4453e57-catalog-content\") pod \"redhat-operators-rb649\" (UID: \"fc1d1fd3-36c5-4b47-bd32-230dc4453e57\") " pod="openshift-marketplace/redhat-operators-rb649" Jan 29 16:48:46 crc kubenswrapper[4886]: I0129 16:48:46.340475 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6jb6\" (UniqueName: \"kubernetes.io/projected/fc1d1fd3-36c5-4b47-bd32-230dc4453e57-kube-api-access-t6jb6\") pod \"redhat-operators-rb649\" (UID: \"fc1d1fd3-36c5-4b47-bd32-230dc4453e57\") " pod="openshift-marketplace/redhat-operators-rb649" Jan 29 16:48:46 crc kubenswrapper[4886]: I0129 16:48:46.340739 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc1d1fd3-36c5-4b47-bd32-230dc4453e57-utilities\") pod \"redhat-operators-rb649\" (UID: \"fc1d1fd3-36c5-4b47-bd32-230dc4453e57\") " pod="openshift-marketplace/redhat-operators-rb649" Jan 29 16:48:46 crc kubenswrapper[4886]: I0129 16:48:46.441999 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc1d1fd3-36c5-4b47-bd32-230dc4453e57-utilities\") pod \"redhat-operators-rb649\" (UID: \"fc1d1fd3-36c5-4b47-bd32-230dc4453e57\") " pod="openshift-marketplace/redhat-operators-rb649" Jan 29 16:48:46 crc kubenswrapper[4886]: I0129 16:48:46.442130 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc1d1fd3-36c5-4b47-bd32-230dc4453e57-catalog-content\") pod \"redhat-operators-rb649\" (UID: \"fc1d1fd3-36c5-4b47-bd32-230dc4453e57\") " pod="openshift-marketplace/redhat-operators-rb649" Jan 29 16:48:46 crc kubenswrapper[4886]: I0129 16:48:46.442171 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6jb6\" (UniqueName: \"kubernetes.io/projected/fc1d1fd3-36c5-4b47-bd32-230dc4453e57-kube-api-access-t6jb6\") pod \"redhat-operators-rb649\" (UID: \"fc1d1fd3-36c5-4b47-bd32-230dc4453e57\") " pod="openshift-marketplace/redhat-operators-rb649" Jan 29 16:48:46 crc kubenswrapper[4886]: I0129 16:48:46.442571 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc1d1fd3-36c5-4b47-bd32-230dc4453e57-utilities\") pod \"redhat-operators-rb649\" (UID: \"fc1d1fd3-36c5-4b47-bd32-230dc4453e57\") " pod="openshift-marketplace/redhat-operators-rb649" Jan 29 16:48:46 crc kubenswrapper[4886]: I0129 16:48:46.443205 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc1d1fd3-36c5-4b47-bd32-230dc4453e57-catalog-content\") pod \"redhat-operators-rb649\" (UID: \"fc1d1fd3-36c5-4b47-bd32-230dc4453e57\") " 
pod="openshift-marketplace/redhat-operators-rb649" Jan 29 16:48:46 crc kubenswrapper[4886]: I0129 16:48:46.463692 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6jb6\" (UniqueName: \"kubernetes.io/projected/fc1d1fd3-36c5-4b47-bd32-230dc4453e57-kube-api-access-t6jb6\") pod \"redhat-operators-rb649\" (UID: \"fc1d1fd3-36c5-4b47-bd32-230dc4453e57\") " pod="openshift-marketplace/redhat-operators-rb649" Jan 29 16:48:46 crc kubenswrapper[4886]: I0129 16:48:46.548316 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rb649" Jan 29 16:48:47 crc kubenswrapper[4886]: I0129 16:48:47.012502 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rb649"] Jan 29 16:48:47 crc kubenswrapper[4886]: I0129 16:48:47.093375 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rb649" event={"ID":"fc1d1fd3-36c5-4b47-bd32-230dc4453e57","Type":"ContainerStarted","Data":"d857247c97994f556a4a4a300a7f0839fd8e562211f2e6ae427fe0ad1d0d3d48"} Jan 29 16:48:48 crc kubenswrapper[4886]: I0129 16:48:48.103800 4886 generic.go:334] "Generic (PLEG): container finished" podID="fc1d1fd3-36c5-4b47-bd32-230dc4453e57" containerID="cc46e50228c504a5ce69248aa0c8fc04aed2d8481106f72d24ed44ddb5847823" exitCode=0 Jan 29 16:48:48 crc kubenswrapper[4886]: I0129 16:48:48.103858 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rb649" event={"ID":"fc1d1fd3-36c5-4b47-bd32-230dc4453e57","Type":"ContainerDied","Data":"cc46e50228c504a5ce69248aa0c8fc04aed2d8481106f72d24ed44ddb5847823"} Jan 29 16:48:49 crc kubenswrapper[4886]: I0129 16:48:49.112830 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rb649" event={"ID":"fc1d1fd3-36c5-4b47-bd32-230dc4453e57","Type":"ContainerStarted","Data":"624f5139f1a2c50f96cd70304d37713a103819d2077781d21599f155b38e0928"} Jan 29 16:48:50 crc kubenswrapper[4886]: I0129 16:48:50.124602 4886 generic.go:334] "Generic (PLEG): container finished" podID="fc1d1fd3-36c5-4b47-bd32-230dc4453e57" containerID="624f5139f1a2c50f96cd70304d37713a103819d2077781d21599f155b38e0928" exitCode=0 Jan 29 16:48:50 crc kubenswrapper[4886]: I0129 16:48:50.124658 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rb649" event={"ID":"fc1d1fd3-36c5-4b47-bd32-230dc4453e57","Type":"ContainerDied","Data":"624f5139f1a2c50f96cd70304d37713a103819d2077781d21599f155b38e0928"} Jan 29 16:48:51 crc kubenswrapper[4886]: I0129 16:48:51.135254 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rb649" event={"ID":"fc1d1fd3-36c5-4b47-bd32-230dc4453e57","Type":"ContainerStarted","Data":"52e7407b95f1e3d37b25c372e80a9917554036fb5d36e571babdf608c6ab8b2c"} Jan 29 16:48:51 crc kubenswrapper[4886]: I0129 16:48:51.157761 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rb649" podStartSLOduration=2.405310719 podStartE2EDuration="5.157739289s" podCreationTimestamp="2026-01-29 16:48:46 +0000 UTC" firstStartedPulling="2026-01-29 16:48:48.105643246 +0000 UTC m=+1611.014362568" lastFinishedPulling="2026-01-29 16:48:50.858071856 +0000 UTC m=+1613.766791138" observedRunningTime="2026-01-29 16:48:51.152785229 +0000 UTC m=+1614.061504501" watchObservedRunningTime="2026-01-29 16:48:51.157739289 +0000 UTC m=+1614.066458601" Jan 
29 16:48:56 crc kubenswrapper[4886]: I0129 16:48:56.549398 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rb649" Jan 29 16:48:56 crc kubenswrapper[4886]: I0129 16:48:56.550494 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rb649" Jan 29 16:48:57 crc kubenswrapper[4886]: I0129 16:48:57.601362 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rb649" podUID="fc1d1fd3-36c5-4b47-bd32-230dc4453e57" containerName="registry-server" probeResult="failure" output=< Jan 29 16:48:57 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Jan 29 16:48:57 crc kubenswrapper[4886]: > Jan 29 16:48:57 crc kubenswrapper[4886]: I0129 16:48:57.615737 4886 scope.go:117] "RemoveContainer" containerID="705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463" Jan 29 16:48:57 crc kubenswrapper[4886]: E0129 16:48:57.616156 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 16:48:59 crc kubenswrapper[4886]: I0129 16:48:59.297145 4886 scope.go:117] "RemoveContainer" containerID="5d883c5a30d8f4bbb039e6aaa651b8e09e6b2a8064244a25c33a761d3d8863ae" Jan 29 16:48:59 crc kubenswrapper[4886]: I0129 16:48:59.337564 4886 scope.go:117] "RemoveContainer" containerID="f97710e37d132101bc18cdd88c6b7f51c7d65099d23a9fcf1887c1bba9f84a3e" Jan 29 16:48:59 crc kubenswrapper[4886]: I0129 16:48:59.366178 4886 scope.go:117] "RemoveContainer" containerID="33b121937df6965f1e7c4b97eec963e1caa986d708bab7e6baf54e700c6b9a38" Jan 29 16:49:01 crc kubenswrapper[4886]: I0129 16:49:01.789372 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b9w7q"] Jan 29 16:49:01 crc kubenswrapper[4886]: I0129 16:49:01.793474 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b9w7q" Jan 29 16:49:01 crc kubenswrapper[4886]: I0129 16:49:01.807005 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b9w7q"] Jan 29 16:49:01 crc kubenswrapper[4886]: I0129 16:49:01.826136 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c468bcf2-7186-4ef4-9770-70d4776e478d-utilities\") pod \"community-operators-b9w7q\" (UID: \"c468bcf2-7186-4ef4-9770-70d4776e478d\") " pod="openshift-marketplace/community-operators-b9w7q" Jan 29 16:49:01 crc kubenswrapper[4886]: I0129 16:49:01.826217 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c468bcf2-7186-4ef4-9770-70d4776e478d-catalog-content\") pod \"community-operators-b9w7q\" (UID: \"c468bcf2-7186-4ef4-9770-70d4776e478d\") " pod="openshift-marketplace/community-operators-b9w7q" Jan 29 16:49:01 crc kubenswrapper[4886]: I0129 16:49:01.826320 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cltjl\" (UniqueName: \"kubernetes.io/projected/c468bcf2-7186-4ef4-9770-70d4776e478d-kube-api-access-cltjl\") pod \"community-operators-b9w7q\" (UID: \"c468bcf2-7186-4ef4-9770-70d4776e478d\") " pod="openshift-marketplace/community-operators-b9w7q" Jan 29 16:49:01 crc kubenswrapper[4886]: I0129 16:49:01.928383 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cltjl\" (UniqueName: \"kubernetes.io/projected/c468bcf2-7186-4ef4-9770-70d4776e478d-kube-api-access-cltjl\") pod \"community-operators-b9w7q\" (UID: \"c468bcf2-7186-4ef4-9770-70d4776e478d\") " pod="openshift-marketplace/community-operators-b9w7q" Jan 29 16:49:01 crc kubenswrapper[4886]: I0129 16:49:01.928559 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c468bcf2-7186-4ef4-9770-70d4776e478d-utilities\") pod \"community-operators-b9w7q\" (UID: \"c468bcf2-7186-4ef4-9770-70d4776e478d\") " pod="openshift-marketplace/community-operators-b9w7q" Jan 29 16:49:01 crc kubenswrapper[4886]: I0129 16:49:01.928606 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c468bcf2-7186-4ef4-9770-70d4776e478d-catalog-content\") pod \"community-operators-b9w7q\" (UID: \"c468bcf2-7186-4ef4-9770-70d4776e478d\") " pod="openshift-marketplace/community-operators-b9w7q" Jan 29 16:49:01 crc kubenswrapper[4886]: I0129 16:49:01.929218 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c468bcf2-7186-4ef4-9770-70d4776e478d-utilities\") pod \"community-operators-b9w7q\" (UID: \"c468bcf2-7186-4ef4-9770-70d4776e478d\") " pod="openshift-marketplace/community-operators-b9w7q" Jan 29 16:49:01 crc kubenswrapper[4886]: I0129 16:49:01.929296 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c468bcf2-7186-4ef4-9770-70d4776e478d-catalog-content\") pod \"community-operators-b9w7q\" (UID: \"c468bcf2-7186-4ef4-9770-70d4776e478d\") " pod="openshift-marketplace/community-operators-b9w7q" Jan 29 16:49:01 crc kubenswrapper[4886]: I0129 16:49:01.967786 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cltjl\" (UniqueName: \"kubernetes.io/projected/c468bcf2-7186-4ef4-9770-70d4776e478d-kube-api-access-cltjl\") pod \"community-operators-b9w7q\" (UID: \"c468bcf2-7186-4ef4-9770-70d4776e478d\") " pod="openshift-marketplace/community-operators-b9w7q" Jan 29 16:49:02 crc kubenswrapper[4886]: I0129 16:49:02.126622 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b9w7q" Jan 29 16:49:02 crc kubenswrapper[4886]: W0129 16:49:02.626129 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc468bcf2_7186_4ef4_9770_70d4776e478d.slice/crio-799625d5a7995ef45825f53d082e2fd90ee42fdf7e125df25160449117b36de2 WatchSource:0}: Error finding container 799625d5a7995ef45825f53d082e2fd90ee42fdf7e125df25160449117b36de2: Status 404 returned error can't find the container with id 799625d5a7995ef45825f53d082e2fd90ee42fdf7e125df25160449117b36de2 Jan 29 16:49:02 crc kubenswrapper[4886]: I0129 16:49:02.632203 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b9w7q"] Jan 29 16:49:03 crc kubenswrapper[4886]: I0129 16:49:03.226356 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9w7q" event={"ID":"c468bcf2-7186-4ef4-9770-70d4776e478d","Type":"ContainerStarted","Data":"799625d5a7995ef45825f53d082e2fd90ee42fdf7e125df25160449117b36de2"} Jan 29 16:49:04 crc kubenswrapper[4886]: I0129 16:49:04.241407 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9w7q" event={"ID":"c468bcf2-7186-4ef4-9770-70d4776e478d","Type":"ContainerStarted","Data":"082c04bad5b5edb39719d59fe983024d447a170e7b3cb883e5e9c1dec4786393"} Jan 29 16:49:05 crc kubenswrapper[4886]: I0129 16:49:05.257124 4886 generic.go:334] "Generic (PLEG): container finished" podID="c468bcf2-7186-4ef4-9770-70d4776e478d" containerID="082c04bad5b5edb39719d59fe983024d447a170e7b3cb883e5e9c1dec4786393" exitCode=0 Jan 29 16:49:05 crc kubenswrapper[4886]: I0129 16:49:05.257206 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9w7q" event={"ID":"c468bcf2-7186-4ef4-9770-70d4776e478d","Type":"ContainerDied","Data":"082c04bad5b5edb39719d59fe983024d447a170e7b3cb883e5e9c1dec4786393"} Jan 29 16:49:06 crc kubenswrapper[4886]: I0129 16:49:06.606244 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rb649" Jan 29 16:49:06 crc kubenswrapper[4886]: I0129 16:49:06.677576 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rb649" Jan 29 16:49:07 crc kubenswrapper[4886]: I0129 16:49:07.278475 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9w7q" event={"ID":"c468bcf2-7186-4ef4-9770-70d4776e478d","Type":"ContainerStarted","Data":"ac6f611b16c6f5d7856add64806d578e7d1ff0562407cf21ec433ec91447a1e8"} Jan 29 16:49:07 crc kubenswrapper[4886]: I0129 16:49:07.726866 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rb649"] Jan 29 16:49:08 crc kubenswrapper[4886]: I0129 16:49:08.291011 4886 generic.go:334] "Generic (PLEG): container finished" podID="c468bcf2-7186-4ef4-9770-70d4776e478d" containerID="ac6f611b16c6f5d7856add64806d578e7d1ff0562407cf21ec433ec91447a1e8" 
exitCode=0 Jan 29 16:49:08 crc kubenswrapper[4886]: I0129 16:49:08.291063 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9w7q" event={"ID":"c468bcf2-7186-4ef4-9770-70d4776e478d","Type":"ContainerDied","Data":"ac6f611b16c6f5d7856add64806d578e7d1ff0562407cf21ec433ec91447a1e8"} Jan 29 16:49:08 crc kubenswrapper[4886]: I0129 16:49:08.291386 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rb649" podUID="fc1d1fd3-36c5-4b47-bd32-230dc4453e57" containerName="registry-server" containerID="cri-o://52e7407b95f1e3d37b25c372e80a9917554036fb5d36e571babdf608c6ab8b2c" gracePeriod=2 Jan 29 16:49:08 crc kubenswrapper[4886]: I0129 16:49:08.824224 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rb649" Jan 29 16:49:08 crc kubenswrapper[4886]: I0129 16:49:08.858606 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc1d1fd3-36c5-4b47-bd32-230dc4453e57-utilities\") pod \"fc1d1fd3-36c5-4b47-bd32-230dc4453e57\" (UID: \"fc1d1fd3-36c5-4b47-bd32-230dc4453e57\") " Jan 29 16:49:08 crc kubenswrapper[4886]: I0129 16:49:08.858725 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc1d1fd3-36c5-4b47-bd32-230dc4453e57-catalog-content\") pod \"fc1d1fd3-36c5-4b47-bd32-230dc4453e57\" (UID: \"fc1d1fd3-36c5-4b47-bd32-230dc4453e57\") " Jan 29 16:49:08 crc kubenswrapper[4886]: I0129 16:49:08.858828 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6jb6\" (UniqueName: \"kubernetes.io/projected/fc1d1fd3-36c5-4b47-bd32-230dc4453e57-kube-api-access-t6jb6\") pod \"fc1d1fd3-36c5-4b47-bd32-230dc4453e57\" (UID: \"fc1d1fd3-36c5-4b47-bd32-230dc4453e57\") " Jan 29 16:49:08 crc kubenswrapper[4886]: I0129 16:49:08.864830 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc1d1fd3-36c5-4b47-bd32-230dc4453e57-utilities" (OuterVolumeSpecName: "utilities") pod "fc1d1fd3-36c5-4b47-bd32-230dc4453e57" (UID: "fc1d1fd3-36c5-4b47-bd32-230dc4453e57"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:49:08 crc kubenswrapper[4886]: I0129 16:49:08.872926 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc1d1fd3-36c5-4b47-bd32-230dc4453e57-kube-api-access-t6jb6" (OuterVolumeSpecName: "kube-api-access-t6jb6") pod "fc1d1fd3-36c5-4b47-bd32-230dc4453e57" (UID: "fc1d1fd3-36c5-4b47-bd32-230dc4453e57"). InnerVolumeSpecName "kube-api-access-t6jb6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:49:08 crc kubenswrapper[4886]: I0129 16:49:08.960813 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6jb6\" (UniqueName: \"kubernetes.io/projected/fc1d1fd3-36c5-4b47-bd32-230dc4453e57-kube-api-access-t6jb6\") on node \"crc\" DevicePath \"\"" Jan 29 16:49:08 crc kubenswrapper[4886]: I0129 16:49:08.960863 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc1d1fd3-36c5-4b47-bd32-230dc4453e57-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:49:08 crc kubenswrapper[4886]: I0129 16:49:08.997809 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc1d1fd3-36c5-4b47-bd32-230dc4453e57-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc1d1fd3-36c5-4b47-bd32-230dc4453e57" (UID: "fc1d1fd3-36c5-4b47-bd32-230dc4453e57"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:49:09 crc kubenswrapper[4886]: I0129 16:49:09.061862 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc1d1fd3-36c5-4b47-bd32-230dc4453e57-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:49:09 crc kubenswrapper[4886]: I0129 16:49:09.305937 4886 generic.go:334] "Generic (PLEG): container finished" podID="fc1d1fd3-36c5-4b47-bd32-230dc4453e57" containerID="52e7407b95f1e3d37b25c372e80a9917554036fb5d36e571babdf608c6ab8b2c" exitCode=0 Jan 29 16:49:09 crc kubenswrapper[4886]: I0129 16:49:09.306055 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rb649" event={"ID":"fc1d1fd3-36c5-4b47-bd32-230dc4453e57","Type":"ContainerDied","Data":"52e7407b95f1e3d37b25c372e80a9917554036fb5d36e571babdf608c6ab8b2c"} Jan 29 16:49:09 crc kubenswrapper[4886]: I0129 16:49:09.306090 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rb649" event={"ID":"fc1d1fd3-36c5-4b47-bd32-230dc4453e57","Type":"ContainerDied","Data":"d857247c97994f556a4a4a300a7f0839fd8e562211f2e6ae427fe0ad1d0d3d48"} Jan 29 16:49:09 crc kubenswrapper[4886]: I0129 16:49:09.306116 4886 scope.go:117] "RemoveContainer" containerID="52e7407b95f1e3d37b25c372e80a9917554036fb5d36e571babdf608c6ab8b2c" Jan 29 16:49:09 crc kubenswrapper[4886]: I0129 16:49:09.306993 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rb649" Jan 29 16:49:09 crc kubenswrapper[4886]: I0129 16:49:09.309427 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9w7q" event={"ID":"c468bcf2-7186-4ef4-9770-70d4776e478d","Type":"ContainerStarted","Data":"59cc412193b3130b39141a3f157a2a8998aa61ddecddcf310dee6b51ec2ffe77"} Jan 29 16:49:09 crc kubenswrapper[4886]: I0129 16:49:09.326529 4886 scope.go:117] "RemoveContainer" containerID="624f5139f1a2c50f96cd70304d37713a103819d2077781d21599f155b38e0928" Jan 29 16:49:09 crc kubenswrapper[4886]: I0129 16:49:09.339625 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b9w7q" podStartSLOduration=5.895748684 podStartE2EDuration="8.33959473s" podCreationTimestamp="2026-01-29 16:49:01 +0000 UTC" firstStartedPulling="2026-01-29 16:49:06.269700144 +0000 UTC m=+1629.178419456" lastFinishedPulling="2026-01-29 16:49:08.71354622 +0000 UTC m=+1631.622265502" observedRunningTime="2026-01-29 16:49:09.332673564 +0000 UTC m=+1632.241392886" watchObservedRunningTime="2026-01-29 16:49:09.33959473 +0000 UTC m=+1632.248314012" Jan 29 16:49:09 crc kubenswrapper[4886]: I0129 16:49:09.357900 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rb649"] Jan 29 16:49:09 crc kubenswrapper[4886]: I0129 16:49:09.367652 4886 scope.go:117] "RemoveContainer" containerID="cc46e50228c504a5ce69248aa0c8fc04aed2d8481106f72d24ed44ddb5847823" Jan 29 16:49:09 crc kubenswrapper[4886]: I0129 16:49:09.372058 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rb649"] Jan 29 16:49:09 crc kubenswrapper[4886]: I0129 16:49:09.389586 4886 scope.go:117] "RemoveContainer" containerID="52e7407b95f1e3d37b25c372e80a9917554036fb5d36e571babdf608c6ab8b2c" Jan 29 16:49:09 crc kubenswrapper[4886]: E0129 16:49:09.389823 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52e7407b95f1e3d37b25c372e80a9917554036fb5d36e571babdf608c6ab8b2c\": container with ID starting with 52e7407b95f1e3d37b25c372e80a9917554036fb5d36e571babdf608c6ab8b2c not found: ID does not exist" containerID="52e7407b95f1e3d37b25c372e80a9917554036fb5d36e571babdf608c6ab8b2c" Jan 29 16:49:09 crc kubenswrapper[4886]: I0129 16:49:09.389877 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52e7407b95f1e3d37b25c372e80a9917554036fb5d36e571babdf608c6ab8b2c"} err="failed to get container status \"52e7407b95f1e3d37b25c372e80a9917554036fb5d36e571babdf608c6ab8b2c\": rpc error: code = NotFound desc = could not find container \"52e7407b95f1e3d37b25c372e80a9917554036fb5d36e571babdf608c6ab8b2c\": container with ID starting with 52e7407b95f1e3d37b25c372e80a9917554036fb5d36e571babdf608c6ab8b2c not found: ID does not exist" Jan 29 16:49:09 crc kubenswrapper[4886]: I0129 16:49:09.389915 4886 scope.go:117] "RemoveContainer" containerID="624f5139f1a2c50f96cd70304d37713a103819d2077781d21599f155b38e0928" Jan 29 16:49:09 crc kubenswrapper[4886]: E0129 16:49:09.390176 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"624f5139f1a2c50f96cd70304d37713a103819d2077781d21599f155b38e0928\": container with ID starting with 624f5139f1a2c50f96cd70304d37713a103819d2077781d21599f155b38e0928 not found: ID does not exist" 
containerID="624f5139f1a2c50f96cd70304d37713a103819d2077781d21599f155b38e0928" Jan 29 16:49:09 crc kubenswrapper[4886]: I0129 16:49:09.390214 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"624f5139f1a2c50f96cd70304d37713a103819d2077781d21599f155b38e0928"} err="failed to get container status \"624f5139f1a2c50f96cd70304d37713a103819d2077781d21599f155b38e0928\": rpc error: code = NotFound desc = could not find container \"624f5139f1a2c50f96cd70304d37713a103819d2077781d21599f155b38e0928\": container with ID starting with 624f5139f1a2c50f96cd70304d37713a103819d2077781d21599f155b38e0928 not found: ID does not exist" Jan 29 16:49:09 crc kubenswrapper[4886]: I0129 16:49:09.390231 4886 scope.go:117] "RemoveContainer" containerID="cc46e50228c504a5ce69248aa0c8fc04aed2d8481106f72d24ed44ddb5847823" Jan 29 16:49:09 crc kubenswrapper[4886]: E0129 16:49:09.390594 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc46e50228c504a5ce69248aa0c8fc04aed2d8481106f72d24ed44ddb5847823\": container with ID starting with cc46e50228c504a5ce69248aa0c8fc04aed2d8481106f72d24ed44ddb5847823 not found: ID does not exist" containerID="cc46e50228c504a5ce69248aa0c8fc04aed2d8481106f72d24ed44ddb5847823" Jan 29 16:49:09 crc kubenswrapper[4886]: I0129 16:49:09.390634 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc46e50228c504a5ce69248aa0c8fc04aed2d8481106f72d24ed44ddb5847823"} err="failed to get container status \"cc46e50228c504a5ce69248aa0c8fc04aed2d8481106f72d24ed44ddb5847823\": rpc error: code = NotFound desc = could not find container \"cc46e50228c504a5ce69248aa0c8fc04aed2d8481106f72d24ed44ddb5847823\": container with ID starting with cc46e50228c504a5ce69248aa0c8fc04aed2d8481106f72d24ed44ddb5847823 not found: ID does not exist" Jan 29 16:49:10 crc kubenswrapper[4886]: I0129 16:49:10.632630 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc1d1fd3-36c5-4b47-bd32-230dc4453e57" path="/var/lib/kubelet/pods/fc1d1fd3-36c5-4b47-bd32-230dc4453e57/volumes" Jan 29 16:49:11 crc kubenswrapper[4886]: I0129 16:49:11.616010 4886 scope.go:117] "RemoveContainer" containerID="705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463" Jan 29 16:49:11 crc kubenswrapper[4886]: E0129 16:49:11.616844 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 16:49:12 crc kubenswrapper[4886]: I0129 16:49:12.127115 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b9w7q" Jan 29 16:49:12 crc kubenswrapper[4886]: I0129 16:49:12.127652 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b9w7q" Jan 29 16:49:12 crc kubenswrapper[4886]: I0129 16:49:12.201893 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b9w7q" Jan 29 16:49:13 crc kubenswrapper[4886]: I0129 16:49:13.434610 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-b9w7q" Jan 29 16:49:13 crc kubenswrapper[4886]: I0129 16:49:13.718042 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b9w7q"] Jan 29 16:49:15 crc kubenswrapper[4886]: I0129 16:49:15.377016 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b9w7q" podUID="c468bcf2-7186-4ef4-9770-70d4776e478d" containerName="registry-server" containerID="cri-o://59cc412193b3130b39141a3f157a2a8998aa61ddecddcf310dee6b51ec2ffe77" gracePeriod=2 Jan 29 16:49:15 crc kubenswrapper[4886]: I0129 16:49:15.878473 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b9w7q" Jan 29 16:49:16 crc kubenswrapper[4886]: I0129 16:49:16.005373 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c468bcf2-7186-4ef4-9770-70d4776e478d-utilities\") pod \"c468bcf2-7186-4ef4-9770-70d4776e478d\" (UID: \"c468bcf2-7186-4ef4-9770-70d4776e478d\") " Jan 29 16:49:16 crc kubenswrapper[4886]: I0129 16:49:16.005446 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cltjl\" (UniqueName: \"kubernetes.io/projected/c468bcf2-7186-4ef4-9770-70d4776e478d-kube-api-access-cltjl\") pod \"c468bcf2-7186-4ef4-9770-70d4776e478d\" (UID: \"c468bcf2-7186-4ef4-9770-70d4776e478d\") " Jan 29 16:49:16 crc kubenswrapper[4886]: I0129 16:49:16.005530 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c468bcf2-7186-4ef4-9770-70d4776e478d-catalog-content\") pod \"c468bcf2-7186-4ef4-9770-70d4776e478d\" (UID: \"c468bcf2-7186-4ef4-9770-70d4776e478d\") " Jan 29 16:49:16 crc kubenswrapper[4886]: I0129 16:49:16.006672 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c468bcf2-7186-4ef4-9770-70d4776e478d-utilities" (OuterVolumeSpecName: "utilities") pod "c468bcf2-7186-4ef4-9770-70d4776e478d" (UID: "c468bcf2-7186-4ef4-9770-70d4776e478d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:49:16 crc kubenswrapper[4886]: I0129 16:49:16.015564 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c468bcf2-7186-4ef4-9770-70d4776e478d-kube-api-access-cltjl" (OuterVolumeSpecName: "kube-api-access-cltjl") pod "c468bcf2-7186-4ef4-9770-70d4776e478d" (UID: "c468bcf2-7186-4ef4-9770-70d4776e478d"). InnerVolumeSpecName "kube-api-access-cltjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:49:16 crc kubenswrapper[4886]: I0129 16:49:16.105409 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c468bcf2-7186-4ef4-9770-70d4776e478d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c468bcf2-7186-4ef4-9770-70d4776e478d" (UID: "c468bcf2-7186-4ef4-9770-70d4776e478d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:49:16 crc kubenswrapper[4886]: I0129 16:49:16.106964 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c468bcf2-7186-4ef4-9770-70d4776e478d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:49:16 crc kubenswrapper[4886]: I0129 16:49:16.106996 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cltjl\" (UniqueName: \"kubernetes.io/projected/c468bcf2-7186-4ef4-9770-70d4776e478d-kube-api-access-cltjl\") on node \"crc\" DevicePath \"\"" Jan 29 16:49:16 crc kubenswrapper[4886]: I0129 16:49:16.107009 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c468bcf2-7186-4ef4-9770-70d4776e478d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:49:16 crc kubenswrapper[4886]: I0129 16:49:16.388093 4886 generic.go:334] "Generic (PLEG): container finished" podID="c468bcf2-7186-4ef4-9770-70d4776e478d" containerID="59cc412193b3130b39141a3f157a2a8998aa61ddecddcf310dee6b51ec2ffe77" exitCode=0 Jan 29 16:49:16 crc kubenswrapper[4886]: I0129 16:49:16.388136 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9w7q" event={"ID":"c468bcf2-7186-4ef4-9770-70d4776e478d","Type":"ContainerDied","Data":"59cc412193b3130b39141a3f157a2a8998aa61ddecddcf310dee6b51ec2ffe77"} Jan 29 16:49:16 crc kubenswrapper[4886]: I0129 16:49:16.388171 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9w7q" event={"ID":"c468bcf2-7186-4ef4-9770-70d4776e478d","Type":"ContainerDied","Data":"799625d5a7995ef45825f53d082e2fd90ee42fdf7e125df25160449117b36de2"} Jan 29 16:49:16 crc kubenswrapper[4886]: I0129 16:49:16.388190 4886 scope.go:117] "RemoveContainer" containerID="59cc412193b3130b39141a3f157a2a8998aa61ddecddcf310dee6b51ec2ffe77" Jan 29 16:49:16 crc kubenswrapper[4886]: I0129 16:49:16.388186 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b9w7q" Jan 29 16:49:16 crc kubenswrapper[4886]: I0129 16:49:16.413319 4886 scope.go:117] "RemoveContainer" containerID="ac6f611b16c6f5d7856add64806d578e7d1ff0562407cf21ec433ec91447a1e8" Jan 29 16:49:16 crc kubenswrapper[4886]: I0129 16:49:16.426423 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b9w7q"] Jan 29 16:49:16 crc kubenswrapper[4886]: I0129 16:49:16.431122 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b9w7q"] Jan 29 16:49:16 crc kubenswrapper[4886]: I0129 16:49:16.458770 4886 scope.go:117] "RemoveContainer" containerID="082c04bad5b5edb39719d59fe983024d447a170e7b3cb883e5e9c1dec4786393" Jan 29 16:49:16 crc kubenswrapper[4886]: I0129 16:49:16.478902 4886 scope.go:117] "RemoveContainer" containerID="59cc412193b3130b39141a3f157a2a8998aa61ddecddcf310dee6b51ec2ffe77" Jan 29 16:49:16 crc kubenswrapper[4886]: E0129 16:49:16.479264 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59cc412193b3130b39141a3f157a2a8998aa61ddecddcf310dee6b51ec2ffe77\": container with ID starting with 59cc412193b3130b39141a3f157a2a8998aa61ddecddcf310dee6b51ec2ffe77 not found: ID does not exist" containerID="59cc412193b3130b39141a3f157a2a8998aa61ddecddcf310dee6b51ec2ffe77" Jan 29 16:49:16 crc kubenswrapper[4886]: I0129 16:49:16.479308 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59cc412193b3130b39141a3f157a2a8998aa61ddecddcf310dee6b51ec2ffe77"} err="failed to get container status \"59cc412193b3130b39141a3f157a2a8998aa61ddecddcf310dee6b51ec2ffe77\": rpc error: code = NotFound desc = could not find container \"59cc412193b3130b39141a3f157a2a8998aa61ddecddcf310dee6b51ec2ffe77\": container with ID starting with 59cc412193b3130b39141a3f157a2a8998aa61ddecddcf310dee6b51ec2ffe77 not found: ID does not exist" Jan 29 16:49:16 crc kubenswrapper[4886]: I0129 16:49:16.479377 4886 scope.go:117] "RemoveContainer" containerID="ac6f611b16c6f5d7856add64806d578e7d1ff0562407cf21ec433ec91447a1e8" Jan 29 16:49:16 crc kubenswrapper[4886]: E0129 16:49:16.479681 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac6f611b16c6f5d7856add64806d578e7d1ff0562407cf21ec433ec91447a1e8\": container with ID starting with ac6f611b16c6f5d7856add64806d578e7d1ff0562407cf21ec433ec91447a1e8 not found: ID does not exist" containerID="ac6f611b16c6f5d7856add64806d578e7d1ff0562407cf21ec433ec91447a1e8" Jan 29 16:49:16 crc kubenswrapper[4886]: I0129 16:49:16.479718 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac6f611b16c6f5d7856add64806d578e7d1ff0562407cf21ec433ec91447a1e8"} err="failed to get container status \"ac6f611b16c6f5d7856add64806d578e7d1ff0562407cf21ec433ec91447a1e8\": rpc error: code = NotFound desc = could not find container \"ac6f611b16c6f5d7856add64806d578e7d1ff0562407cf21ec433ec91447a1e8\": container with ID starting with ac6f611b16c6f5d7856add64806d578e7d1ff0562407cf21ec433ec91447a1e8 not found: ID does not exist" Jan 29 16:49:16 crc kubenswrapper[4886]: I0129 16:49:16.479783 4886 scope.go:117] "RemoveContainer" containerID="082c04bad5b5edb39719d59fe983024d447a170e7b3cb883e5e9c1dec4786393" Jan 29 16:49:16 crc kubenswrapper[4886]: E0129 16:49:16.480021 4886 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"082c04bad5b5edb39719d59fe983024d447a170e7b3cb883e5e9c1dec4786393\": container with ID starting with 082c04bad5b5edb39719d59fe983024d447a170e7b3cb883e5e9c1dec4786393 not found: ID does not exist" containerID="082c04bad5b5edb39719d59fe983024d447a170e7b3cb883e5e9c1dec4786393" Jan 29 16:49:16 crc kubenswrapper[4886]: I0129 16:49:16.480049 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"082c04bad5b5edb39719d59fe983024d447a170e7b3cb883e5e9c1dec4786393"} err="failed to get container status \"082c04bad5b5edb39719d59fe983024d447a170e7b3cb883e5e9c1dec4786393\": rpc error: code = NotFound desc = could not find container \"082c04bad5b5edb39719d59fe983024d447a170e7b3cb883e5e9c1dec4786393\": container with ID starting with 082c04bad5b5edb39719d59fe983024d447a170e7b3cb883e5e9c1dec4786393 not found: ID does not exist" Jan 29 16:49:16 crc kubenswrapper[4886]: I0129 16:49:16.625425 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c468bcf2-7186-4ef4-9770-70d4776e478d" path="/var/lib/kubelet/pods/c468bcf2-7186-4ef4-9770-70d4776e478d/volumes" Jan 29 16:49:23 crc kubenswrapper[4886]: I0129 16:49:23.615720 4886 scope.go:117] "RemoveContainer" containerID="705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463" Jan 29 16:49:23 crc kubenswrapper[4886]: E0129 16:49:23.616315 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 16:49:37 crc kubenswrapper[4886]: I0129 16:49:37.614921 4886 scope.go:117] "RemoveContainer" containerID="705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463" Jan 29 16:49:37 crc kubenswrapper[4886]: E0129 16:49:37.615957 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 16:49:52 crc kubenswrapper[4886]: I0129 16:49:52.615037 4886 scope.go:117] "RemoveContainer" containerID="705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463" Jan 29 16:49:52 crc kubenswrapper[4886]: E0129 16:49:52.616537 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 16:49:59 crc kubenswrapper[4886]: I0129 16:49:59.429162 4886 scope.go:117] "RemoveContainer" containerID="c3183e31247098ddd97f7b27ad0dbf70d02daf691b6fbd6a4595181aba6a0ae9" Jan 29 16:49:59 crc kubenswrapper[4886]: I0129 16:49:59.464509 4886 scope.go:117] "RemoveContainer" 
containerID="0e60e37f19cf29954ac9598d39f3e907b0a8fd7df0f8e5321feafa568cea256e" Jan 29 16:49:59 crc kubenswrapper[4886]: I0129 16:49:59.496945 4886 scope.go:117] "RemoveContainer" containerID="82c9ec7fc7823b99a453ab6558f3f2d190f9fc013e02e7613db77aca6c9d421f" Jan 29 16:49:59 crc kubenswrapper[4886]: I0129 16:49:59.526589 4886 scope.go:117] "RemoveContainer" containerID="e4cccb4d486fe60f0edfb4f7f715ab8d92c12f9f9f4a1cfe4e00c4adc5c34b51" Jan 29 16:49:59 crc kubenswrapper[4886]: I0129 16:49:59.561114 4886 scope.go:117] "RemoveContainer" containerID="a37b6266b19c1ce3a441dff00e8cafa9669109c4ad6f2385f4502687f4af460a" Jan 29 16:49:59 crc kubenswrapper[4886]: I0129 16:49:59.581731 4886 scope.go:117] "RemoveContainer" containerID="8d122cad021ce2744d255a9dc7ff90dfde7fd82fdce7705c91c1c86d943ebbab" Jan 29 16:50:07 crc kubenswrapper[4886]: I0129 16:50:07.615875 4886 scope.go:117] "RemoveContainer" containerID="705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463" Jan 29 16:50:07 crc kubenswrapper[4886]: E0129 16:50:07.617068 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 16:50:22 crc kubenswrapper[4886]: I0129 16:50:22.615813 4886 scope.go:117] "RemoveContainer" containerID="705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463" Jan 29 16:50:22 crc kubenswrapper[4886]: E0129 16:50:22.616824 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 16:50:34 crc kubenswrapper[4886]: I0129 16:50:34.615595 4886 scope.go:117] "RemoveContainer" containerID="705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463" Jan 29 16:50:34 crc kubenswrapper[4886]: E0129 16:50:34.616515 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 16:50:46 crc kubenswrapper[4886]: I0129 16:50:46.615707 4886 scope.go:117] "RemoveContainer" containerID="705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463" Jan 29 16:50:46 crc kubenswrapper[4886]: E0129 16:50:46.616860 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 16:51:00 crc kubenswrapper[4886]: I0129 16:51:00.614842 4886 
scope.go:117] "RemoveContainer" containerID="705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463" Jan 29 16:51:00 crc kubenswrapper[4886]: E0129 16:51:00.615647 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 16:51:12 crc kubenswrapper[4886]: I0129 16:51:12.615042 4886 scope.go:117] "RemoveContainer" containerID="705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463" Jan 29 16:51:12 crc kubenswrapper[4886]: E0129 16:51:12.616044 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 16:51:23 crc kubenswrapper[4886]: I0129 16:51:23.615269 4886 scope.go:117] "RemoveContainer" containerID="705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463" Jan 29 16:51:23 crc kubenswrapper[4886]: E0129 16:51:23.615996 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 16:51:34 crc kubenswrapper[4886]: I0129 16:51:34.616008 4886 scope.go:117] "RemoveContainer" containerID="705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463" Jan 29 16:51:34 crc kubenswrapper[4886]: E0129 16:51:34.617018 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 16:51:45 crc kubenswrapper[4886]: I0129 16:51:45.615710 4886 scope.go:117] "RemoveContainer" containerID="705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463" Jan 29 16:51:45 crc kubenswrapper[4886]: E0129 16:51:45.616699 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 16:51:58 crc kubenswrapper[4886]: I0129 16:51:58.621855 4886 scope.go:117] "RemoveContainer" containerID="705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463" Jan 29 16:51:58 crc kubenswrapper[4886]: E0129 16:51:58.622921 4886 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 16:52:10 crc kubenswrapper[4886]: I0129 16:52:10.615819 4886 scope.go:117] "RemoveContainer" containerID="705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463" Jan 29 16:52:10 crc kubenswrapper[4886]: E0129 16:52:10.616997 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 16:52:21 crc kubenswrapper[4886]: I0129 16:52:21.614897 4886 scope.go:117] "RemoveContainer" containerID="705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463" Jan 29 16:52:21 crc kubenswrapper[4886]: E0129 16:52:21.615916 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 16:52:35 crc kubenswrapper[4886]: I0129 16:52:35.614918 4886 scope.go:117] "RemoveContainer" containerID="705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463" Jan 29 16:52:35 crc kubenswrapper[4886]: E0129 16:52:35.616074 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 16:52:47 crc kubenswrapper[4886]: I0129 16:52:47.615745 4886 scope.go:117] "RemoveContainer" containerID="705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463" Jan 29 16:52:47 crc kubenswrapper[4886]: E0129 16:52:47.616693 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 16:53:01 crc kubenswrapper[4886]: I0129 16:53:01.615689 4886 scope.go:117] "RemoveContainer" containerID="705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463" Jan 29 16:53:01 crc kubenswrapper[4886]: E0129 16:53:01.616506 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 16:53:13 crc kubenswrapper[4886]: I0129 16:53:13.615600 4886 scope.go:117] "RemoveContainer" containerID="705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463" Jan 29 16:53:13 crc kubenswrapper[4886]: E0129 16:53:13.616523 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 16:53:24 crc kubenswrapper[4886]: I0129 16:53:24.615768 4886 scope.go:117] "RemoveContainer" containerID="705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463" Jan 29 16:53:24 crc kubenswrapper[4886]: E0129 16:53:24.616799 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 16:53:38 crc kubenswrapper[4886]: I0129 16:53:38.617904 4886 scope.go:117] "RemoveContainer" containerID="705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463" Jan 29 16:53:39 crc kubenswrapper[4886]: I0129 16:53:39.659776 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerStarted","Data":"8ef97582eea2927ab131d16b422621b32afa666846864a223a782bc24fb0ddda"} Jan 29 16:54:44 crc kubenswrapper[4886]: I0129 16:54:44.248213 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn"] Jan 29 16:54:44 crc kubenswrapper[4886]: E0129 16:54:44.249130 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c468bcf2-7186-4ef4-9770-70d4776e478d" containerName="extract-content" Jan 29 16:54:44 crc kubenswrapper[4886]: I0129 16:54:44.249147 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c468bcf2-7186-4ef4-9770-70d4776e478d" containerName="extract-content" Jan 29 16:54:44 crc kubenswrapper[4886]: E0129 16:54:44.249166 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc1d1fd3-36c5-4b47-bd32-230dc4453e57" containerName="extract-utilities" Jan 29 16:54:44 crc kubenswrapper[4886]: I0129 16:54:44.249174 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc1d1fd3-36c5-4b47-bd32-230dc4453e57" containerName="extract-utilities" Jan 29 16:54:44 crc kubenswrapper[4886]: E0129 16:54:44.249186 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc1d1fd3-36c5-4b47-bd32-230dc4453e57" containerName="extract-content" Jan 29 16:54:44 crc kubenswrapper[4886]: I0129 16:54:44.249196 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc1d1fd3-36c5-4b47-bd32-230dc4453e57" containerName="extract-content" Jan 29 16:54:44 crc 
kubenswrapper[4886]: E0129 16:54:44.249211 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c468bcf2-7186-4ef4-9770-70d4776e478d" containerName="extract-utilities" Jan 29 16:54:44 crc kubenswrapper[4886]: I0129 16:54:44.249219 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c468bcf2-7186-4ef4-9770-70d4776e478d" containerName="extract-utilities" Jan 29 16:54:44 crc kubenswrapper[4886]: E0129 16:54:44.249235 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc1d1fd3-36c5-4b47-bd32-230dc4453e57" containerName="registry-server" Jan 29 16:54:44 crc kubenswrapper[4886]: I0129 16:54:44.249242 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc1d1fd3-36c5-4b47-bd32-230dc4453e57" containerName="registry-server" Jan 29 16:54:44 crc kubenswrapper[4886]: E0129 16:54:44.249258 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c468bcf2-7186-4ef4-9770-70d4776e478d" containerName="registry-server" Jan 29 16:54:44 crc kubenswrapper[4886]: I0129 16:54:44.249266 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c468bcf2-7186-4ef4-9770-70d4776e478d" containerName="registry-server" Jan 29 16:54:44 crc kubenswrapper[4886]: I0129 16:54:44.249450 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc1d1fd3-36c5-4b47-bd32-230dc4453e57" containerName="registry-server" Jan 29 16:54:44 crc kubenswrapper[4886]: I0129 16:54:44.249471 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="c468bcf2-7186-4ef4-9770-70d4776e478d" containerName="registry-server" Jan 29 16:54:44 crc kubenswrapper[4886]: I0129 16:54:44.250912 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn" Jan 29 16:54:44 crc kubenswrapper[4886]: I0129 16:54:44.254007 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 16:54:44 crc kubenswrapper[4886]: I0129 16:54:44.255428 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn"] Jan 29 16:54:44 crc kubenswrapper[4886]: I0129 16:54:44.319770 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn\" (UID: \"1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn" Jan 29 16:54:44 crc kubenswrapper[4886]: I0129 16:54:44.319845 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn\" (UID: \"1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn" Jan 29 16:54:44 crc kubenswrapper[4886]: I0129 16:54:44.319933 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsrrs\" (UniqueName: \"kubernetes.io/projected/1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7-kube-api-access-hsrrs\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn\" (UID: 
\"1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn" Jan 29 16:54:44 crc kubenswrapper[4886]: I0129 16:54:44.421477 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn\" (UID: \"1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn" Jan 29 16:54:44 crc kubenswrapper[4886]: I0129 16:54:44.421569 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsrrs\" (UniqueName: \"kubernetes.io/projected/1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7-kube-api-access-hsrrs\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn\" (UID: \"1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn" Jan 29 16:54:44 crc kubenswrapper[4886]: I0129 16:54:44.421689 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn\" (UID: \"1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn" Jan 29 16:54:44 crc kubenswrapper[4886]: I0129 16:54:44.422177 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn\" (UID: \"1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn" Jan 29 16:54:44 crc kubenswrapper[4886]: I0129 16:54:44.422203 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn\" (UID: \"1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn" Jan 29 16:54:44 crc kubenswrapper[4886]: I0129 16:54:44.452756 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsrrs\" (UniqueName: \"kubernetes.io/projected/1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7-kube-api-access-hsrrs\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn\" (UID: \"1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn" Jan 29 16:54:44 crc kubenswrapper[4886]: I0129 16:54:44.569062 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn" Jan 29 16:54:45 crc kubenswrapper[4886]: I0129 16:54:45.130376 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn"] Jan 29 16:54:45 crc kubenswrapper[4886]: I0129 16:54:45.224637 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn" event={"ID":"1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7","Type":"ContainerStarted","Data":"31a22a4610e4bc5ef385c72ff41fe37a167bc49f9af10fc97aab59455595fd80"} Jan 29 16:54:46 crc kubenswrapper[4886]: I0129 16:54:46.234293 4886 generic.go:334] "Generic (PLEG): container finished" podID="1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7" containerID="f9304c3928205846747e1e5c7f125f756ac7129c49ff039ab595e9d33a42c1cc" exitCode=0 Jan 29 16:54:46 crc kubenswrapper[4886]: I0129 16:54:46.234397 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn" event={"ID":"1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7","Type":"ContainerDied","Data":"f9304c3928205846747e1e5c7f125f756ac7129c49ff039ab595e9d33a42c1cc"} Jan 29 16:54:46 crc kubenswrapper[4886]: I0129 16:54:46.236644 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:54:52 crc kubenswrapper[4886]: I0129 16:54:52.290135 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn" event={"ID":"1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7","Type":"ContainerStarted","Data":"b84010ce0c4043243efacf85a0fcfff301b23fe298fb04de818639758c93f7fb"} Jan 29 16:54:53 crc kubenswrapper[4886]: I0129 16:54:53.298765 4886 generic.go:334] "Generic (PLEG): container finished" podID="1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7" containerID="b84010ce0c4043243efacf85a0fcfff301b23fe298fb04de818639758c93f7fb" exitCode=0 Jan 29 16:54:53 crc kubenswrapper[4886]: I0129 16:54:53.298834 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn" event={"ID":"1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7","Type":"ContainerDied","Data":"b84010ce0c4043243efacf85a0fcfff301b23fe298fb04de818639758c93f7fb"} Jan 29 16:54:54 crc kubenswrapper[4886]: I0129 16:54:54.317145 4886 generic.go:334] "Generic (PLEG): container finished" podID="1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7" containerID="5f67d03a2e4c4b97cf1d3b14a5246c56dee6336601fe7d4e5a56a15ac76c14ea" exitCode=0 Jan 29 16:54:54 crc kubenswrapper[4886]: I0129 16:54:54.317757 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn" event={"ID":"1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7","Type":"ContainerDied","Data":"5f67d03a2e4c4b97cf1d3b14a5246c56dee6336601fe7d4e5a56a15ac76c14ea"} Jan 29 16:54:55 crc kubenswrapper[4886]: I0129 16:54:55.607900 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn" Jan 29 16:54:55 crc kubenswrapper[4886]: I0129 16:54:55.743361 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7-bundle\") pod \"1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7\" (UID: \"1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7\") " Jan 29 16:54:55 crc kubenswrapper[4886]: I0129 16:54:55.743433 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7-util\") pod \"1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7\" (UID: \"1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7\") " Jan 29 16:54:55 crc kubenswrapper[4886]: I0129 16:54:55.743471 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsrrs\" (UniqueName: \"kubernetes.io/projected/1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7-kube-api-access-hsrrs\") pod \"1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7\" (UID: \"1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7\") " Jan 29 16:54:55 crc kubenswrapper[4886]: I0129 16:54:55.744226 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7-bundle" (OuterVolumeSpecName: "bundle") pod "1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7" (UID: "1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:54:55 crc kubenswrapper[4886]: I0129 16:54:55.749691 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7-kube-api-access-hsrrs" (OuterVolumeSpecName: "kube-api-access-hsrrs") pod "1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7" (UID: "1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7"). InnerVolumeSpecName "kube-api-access-hsrrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:54:55 crc kubenswrapper[4886]: I0129 16:54:55.756961 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7-util" (OuterVolumeSpecName: "util") pod "1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7" (UID: "1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:54:55 crc kubenswrapper[4886]: I0129 16:54:55.845936 4886 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:54:55 crc kubenswrapper[4886]: I0129 16:54:55.845983 4886 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7-util\") on node \"crc\" DevicePath \"\"" Jan 29 16:54:55 crc kubenswrapper[4886]: I0129 16:54:55.845999 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsrrs\" (UniqueName: \"kubernetes.io/projected/1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7-kube-api-access-hsrrs\") on node \"crc\" DevicePath \"\"" Jan 29 16:54:56 crc kubenswrapper[4886]: I0129 16:54:56.339461 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn" event={"ID":"1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7","Type":"ContainerDied","Data":"31a22a4610e4bc5ef385c72ff41fe37a167bc49f9af10fc97aab59455595fd80"} Jan 29 16:54:56 crc kubenswrapper[4886]: I0129 16:54:56.339841 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31a22a4610e4bc5ef385c72ff41fe37a167bc49f9af10fc97aab59455595fd80" Jan 29 16:54:56 crc kubenswrapper[4886]: I0129 16:54:56.339566 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn" Jan 29 16:55:00 crc kubenswrapper[4886]: I0129 16:55:00.868431 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-xn5zh"] Jan 29 16:55:00 crc kubenswrapper[4886]: E0129 16:55:00.869077 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7" containerName="pull" Jan 29 16:55:00 crc kubenswrapper[4886]: I0129 16:55:00.869093 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7" containerName="pull" Jan 29 16:55:00 crc kubenswrapper[4886]: E0129 16:55:00.869121 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7" containerName="extract" Jan 29 16:55:00 crc kubenswrapper[4886]: I0129 16:55:00.869131 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7" containerName="extract" Jan 29 16:55:00 crc kubenswrapper[4886]: E0129 16:55:00.869150 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7" containerName="util" Jan 29 16:55:00 crc kubenswrapper[4886]: I0129 16:55:00.869158 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7" containerName="util" Jan 29 16:55:00 crc kubenswrapper[4886]: I0129 16:55:00.869317 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7" containerName="extract" Jan 29 16:55:00 crc kubenswrapper[4886]: I0129 16:55:00.870042 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-xn5zh" Jan 29 16:55:00 crc kubenswrapper[4886]: I0129 16:55:00.874249 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 29 16:55:00 crc kubenswrapper[4886]: I0129 16:55:00.874688 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 29 16:55:00 crc kubenswrapper[4886]: I0129 16:55:00.875034 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-vhlcq" Jan 29 16:55:00 crc kubenswrapper[4886]: I0129 16:55:00.877958 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-xn5zh"] Jan 29 16:55:01 crc kubenswrapper[4886]: I0129 16:55:01.027674 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmbc9\" (UniqueName: \"kubernetes.io/projected/64313301-3779-4923-949f-b8de5c30b5bb-kube-api-access-zmbc9\") pod \"nmstate-operator-646758c888-xn5zh\" (UID: \"64313301-3779-4923-949f-b8de5c30b5bb\") " pod="openshift-nmstate/nmstate-operator-646758c888-xn5zh" Jan 29 16:55:01 crc kubenswrapper[4886]: I0129 16:55:01.129652 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmbc9\" (UniqueName: \"kubernetes.io/projected/64313301-3779-4923-949f-b8de5c30b5bb-kube-api-access-zmbc9\") pod \"nmstate-operator-646758c888-xn5zh\" (UID: \"64313301-3779-4923-949f-b8de5c30b5bb\") " pod="openshift-nmstate/nmstate-operator-646758c888-xn5zh" Jan 29 16:55:01 crc kubenswrapper[4886]: I0129 16:55:01.158474 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmbc9\" (UniqueName: \"kubernetes.io/projected/64313301-3779-4923-949f-b8de5c30b5bb-kube-api-access-zmbc9\") pod \"nmstate-operator-646758c888-xn5zh\" (UID: \"64313301-3779-4923-949f-b8de5c30b5bb\") " pod="openshift-nmstate/nmstate-operator-646758c888-xn5zh" Jan 29 16:55:01 crc kubenswrapper[4886]: I0129 16:55:01.186607 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-xn5zh" Jan 29 16:55:01 crc kubenswrapper[4886]: I0129 16:55:01.589493 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-xn5zh"] Jan 29 16:55:01 crc kubenswrapper[4886]: W0129 16:55:01.594089 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64313301_3779_4923_949f_b8de5c30b5bb.slice/crio-5e7235fed240676037b6e2c21d2766320f11b421eb1b437ae533a71b82db565e WatchSource:0}: Error finding container 5e7235fed240676037b6e2c21d2766320f11b421eb1b437ae533a71b82db565e: Status 404 returned error can't find the container with id 5e7235fed240676037b6e2c21d2766320f11b421eb1b437ae533a71b82db565e Jan 29 16:55:02 crc kubenswrapper[4886]: I0129 16:55:02.384474 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-xn5zh" event={"ID":"64313301-3779-4923-949f-b8de5c30b5bb","Type":"ContainerStarted","Data":"5e7235fed240676037b6e2c21d2766320f11b421eb1b437ae533a71b82db565e"} Jan 29 16:55:05 crc kubenswrapper[4886]: I0129 16:55:05.415901 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-xn5zh" event={"ID":"64313301-3779-4923-949f-b8de5c30b5bb","Type":"ContainerStarted","Data":"0c88d0777aef9b64a944eb8b10ddd89037fa93f3b62d43586509a0c2743e4d27"} Jan 29 16:55:05 crc kubenswrapper[4886]: I0129 16:55:05.454549 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-xn5zh" podStartSLOduration=2.759281174 podStartE2EDuration="5.454521s" podCreationTimestamp="2026-01-29 16:55:00 +0000 UTC" firstStartedPulling="2026-01-29 16:55:01.596229828 +0000 UTC m=+1984.504949090" lastFinishedPulling="2026-01-29 16:55:04.291469604 +0000 UTC m=+1987.200188916" observedRunningTime="2026-01-29 16:55:05.442800378 +0000 UTC m=+1988.351519690" watchObservedRunningTime="2026-01-29 16:55:05.454521 +0000 UTC m=+1988.363240302" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.438940 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-ntx9m"] Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.440735 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-ntx9m" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.449855 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-mv5wp"] Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.450710 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mv5wp" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.451180 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-vr2f5" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.452725 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.463601 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-ntx9m"] Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.482671 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-mv5wp"] Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.494706 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-9lh4n"] Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.495669 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-9lh4n" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.526952 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdkg4\" (UniqueName: \"kubernetes.io/projected/c42903b0-c0d4-4c39-bed3-3c9d083e753d-kube-api-access-gdkg4\") pod \"nmstate-webhook-8474b5b9d8-mv5wp\" (UID: \"c42903b0-c0d4-4c39-bed3-3c9d083e753d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mv5wp" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.527020 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c42903b0-c0d4-4c39-bed3-3c9d083e753d-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-mv5wp\" (UID: \"c42903b0-c0d4-4c39-bed3-3c9d083e753d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mv5wp" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.527054 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wwx9\" (UniqueName: \"kubernetes.io/projected/515c481a-e563-41c3-b5ff-d5957faf5217-kube-api-access-4wwx9\") pod \"nmstate-metrics-54757c584b-ntx9m\" (UID: \"515c481a-e563-41c3-b5ff-d5957faf5217\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-ntx9m" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.586748 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-d4tp4"] Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.587652 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d4tp4" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.590045 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.590074 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.590045 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-cszgj" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.604123 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-d4tp4"] Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.629154 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/848b9df5-c882-4017-b1ad-6ac496646a76-ovs-socket\") pod \"nmstate-handler-9lh4n\" (UID: \"848b9df5-c882-4017-b1ad-6ac496646a76\") " pod="openshift-nmstate/nmstate-handler-9lh4n" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.629201 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdkg4\" (UniqueName: \"kubernetes.io/projected/c42903b0-c0d4-4c39-bed3-3c9d083e753d-kube-api-access-gdkg4\") pod \"nmstate-webhook-8474b5b9d8-mv5wp\" (UID: \"c42903b0-c0d4-4c39-bed3-3c9d083e753d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mv5wp" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.629235 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c42903b0-c0d4-4c39-bed3-3c9d083e753d-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-mv5wp\" (UID: \"c42903b0-c0d4-4c39-bed3-3c9d083e753d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mv5wp" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.629269 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wwx9\" (UniqueName: \"kubernetes.io/projected/515c481a-e563-41c3-b5ff-d5957faf5217-kube-api-access-4wwx9\") pod \"nmstate-metrics-54757c584b-ntx9m\" (UID: \"515c481a-e563-41c3-b5ff-d5957faf5217\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-ntx9m" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.629287 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/848b9df5-c882-4017-b1ad-6ac496646a76-nmstate-lock\") pod \"nmstate-handler-9lh4n\" (UID: \"848b9df5-c882-4017-b1ad-6ac496646a76\") " pod="openshift-nmstate/nmstate-handler-9lh4n" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.629307 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v52p5\" (UniqueName: \"kubernetes.io/projected/848b9df5-c882-4017-b1ad-6ac496646a76-kube-api-access-v52p5\") pod \"nmstate-handler-9lh4n\" (UID: \"848b9df5-c882-4017-b1ad-6ac496646a76\") " pod="openshift-nmstate/nmstate-handler-9lh4n" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.629342 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/848b9df5-c882-4017-b1ad-6ac496646a76-dbus-socket\") pod \"nmstate-handler-9lh4n\" (UID: 
\"848b9df5-c882-4017-b1ad-6ac496646a76\") " pod="openshift-nmstate/nmstate-handler-9lh4n" Jan 29 16:55:06 crc kubenswrapper[4886]: E0129 16:55:06.629758 4886 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 29 16:55:06 crc kubenswrapper[4886]: E0129 16:55:06.629802 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c42903b0-c0d4-4c39-bed3-3c9d083e753d-tls-key-pair podName:c42903b0-c0d4-4c39-bed3-3c9d083e753d nodeName:}" failed. No retries permitted until 2026-01-29 16:55:07.129788239 +0000 UTC m=+1990.038507511 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/c42903b0-c0d4-4c39-bed3-3c9d083e753d-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-mv5wp" (UID: "c42903b0-c0d4-4c39-bed3-3c9d083e753d") : secret "openshift-nmstate-webhook" not found Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.648764 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdkg4\" (UniqueName: \"kubernetes.io/projected/c42903b0-c0d4-4c39-bed3-3c9d083e753d-kube-api-access-gdkg4\") pod \"nmstate-webhook-8474b5b9d8-mv5wp\" (UID: \"c42903b0-c0d4-4c39-bed3-3c9d083e753d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mv5wp" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.648788 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wwx9\" (UniqueName: \"kubernetes.io/projected/515c481a-e563-41c3-b5ff-d5957faf5217-kube-api-access-4wwx9\") pod \"nmstate-metrics-54757c584b-ntx9m\" (UID: \"515c481a-e563-41c3-b5ff-d5957faf5217\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-ntx9m" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.731211 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2814fca3-5ea5-4b77-aad5-0308881c88bb-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-d4tp4\" (UID: \"2814fca3-5ea5-4b77-aad5-0308881c88bb\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d4tp4" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.731299 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2814fca3-5ea5-4b77-aad5-0308881c88bb-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-d4tp4\" (UID: \"2814fca3-5ea5-4b77-aad5-0308881c88bb\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d4tp4" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.731364 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/848b9df5-c882-4017-b1ad-6ac496646a76-nmstate-lock\") pod \"nmstate-handler-9lh4n\" (UID: \"848b9df5-c882-4017-b1ad-6ac496646a76\") " pod="openshift-nmstate/nmstate-handler-9lh4n" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.731388 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v52p5\" (UniqueName: \"kubernetes.io/projected/848b9df5-c882-4017-b1ad-6ac496646a76-kube-api-access-v52p5\") pod \"nmstate-handler-9lh4n\" (UID: \"848b9df5-c882-4017-b1ad-6ac496646a76\") " pod="openshift-nmstate/nmstate-handler-9lh4n" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.731434 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/848b9df5-c882-4017-b1ad-6ac496646a76-dbus-socket\") pod \"nmstate-handler-9lh4n\" (UID: \"848b9df5-c882-4017-b1ad-6ac496646a76\") " pod="openshift-nmstate/nmstate-handler-9lh4n" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.731464 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/848b9df5-c882-4017-b1ad-6ac496646a76-nmstate-lock\") pod \"nmstate-handler-9lh4n\" (UID: \"848b9df5-c882-4017-b1ad-6ac496646a76\") " pod="openshift-nmstate/nmstate-handler-9lh4n" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.731796 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/848b9df5-c882-4017-b1ad-6ac496646a76-dbus-socket\") pod \"nmstate-handler-9lh4n\" (UID: \"848b9df5-c882-4017-b1ad-6ac496646a76\") " pod="openshift-nmstate/nmstate-handler-9lh4n" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.731869 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbdz7\" (UniqueName: \"kubernetes.io/projected/2814fca3-5ea5-4b77-aad5-0308881c88bb-kube-api-access-hbdz7\") pod \"nmstate-console-plugin-7754f76f8b-d4tp4\" (UID: \"2814fca3-5ea5-4b77-aad5-0308881c88bb\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d4tp4" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.731940 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/848b9df5-c882-4017-b1ad-6ac496646a76-ovs-socket\") pod \"nmstate-handler-9lh4n\" (UID: \"848b9df5-c882-4017-b1ad-6ac496646a76\") " pod="openshift-nmstate/nmstate-handler-9lh4n" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.732035 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/848b9df5-c882-4017-b1ad-6ac496646a76-ovs-socket\") pod \"nmstate-handler-9lh4n\" (UID: \"848b9df5-c882-4017-b1ad-6ac496646a76\") " pod="openshift-nmstate/nmstate-handler-9lh4n" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.755231 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v52p5\" (UniqueName: \"kubernetes.io/projected/848b9df5-c882-4017-b1ad-6ac496646a76-kube-api-access-v52p5\") pod \"nmstate-handler-9lh4n\" (UID: \"848b9df5-c882-4017-b1ad-6ac496646a76\") " pod="openshift-nmstate/nmstate-handler-9lh4n" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.766414 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-ntx9m" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.779093 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7d44f9f6d-wvkcd"] Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.780076 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7d44f9f6d-wvkcd" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.796279 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7d44f9f6d-wvkcd"] Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.815077 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-9lh4n" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.848411 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbdz7\" (UniqueName: \"kubernetes.io/projected/2814fca3-5ea5-4b77-aad5-0308881c88bb-kube-api-access-hbdz7\") pod \"nmstate-console-plugin-7754f76f8b-d4tp4\" (UID: \"2814fca3-5ea5-4b77-aad5-0308881c88bb\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d4tp4" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.848520 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2814fca3-5ea5-4b77-aad5-0308881c88bb-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-d4tp4\" (UID: \"2814fca3-5ea5-4b77-aad5-0308881c88bb\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d4tp4" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.848585 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2814fca3-5ea5-4b77-aad5-0308881c88bb-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-d4tp4\" (UID: \"2814fca3-5ea5-4b77-aad5-0308881c88bb\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d4tp4" Jan 29 16:55:06 crc kubenswrapper[4886]: E0129 16:55:06.848823 4886 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 29 16:55:06 crc kubenswrapper[4886]: E0129 16:55:06.848881 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2814fca3-5ea5-4b77-aad5-0308881c88bb-plugin-serving-cert podName:2814fca3-5ea5-4b77-aad5-0308881c88bb nodeName:}" failed. No retries permitted until 2026-01-29 16:55:07.348860193 +0000 UTC m=+1990.257579465 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/2814fca3-5ea5-4b77-aad5-0308881c88bb-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-d4tp4" (UID: "2814fca3-5ea5-4b77-aad5-0308881c88bb") : secret "plugin-serving-cert" not found Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.850273 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2814fca3-5ea5-4b77-aad5-0308881c88bb-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-d4tp4\" (UID: \"2814fca3-5ea5-4b77-aad5-0308881c88bb\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d4tp4" Jan 29 16:55:06 crc kubenswrapper[4886]: W0129 16:55:06.850817 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod848b9df5_c882_4017_b1ad_6ac496646a76.slice/crio-f2c5b96b2c7501c982fdee9d1d0fabac6399f29a42f0cea1334609a6d68f31b8 WatchSource:0}: Error finding container f2c5b96b2c7501c982fdee9d1d0fabac6399f29a42f0cea1334609a6d68f31b8: Status 404 returned error can't find the container with id f2c5b96b2c7501c982fdee9d1d0fabac6399f29a42f0cea1334609a6d68f31b8 Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.871457 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbdz7\" (UniqueName: \"kubernetes.io/projected/2814fca3-5ea5-4b77-aad5-0308881c88bb-kube-api-access-hbdz7\") pod \"nmstate-console-plugin-7754f76f8b-d4tp4\" (UID: \"2814fca3-5ea5-4b77-aad5-0308881c88bb\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d4tp4" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.951804 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d7eb0acf-dfc4-4c24-8231-bfae5b620653-console-oauth-config\") pod \"console-7d44f9f6d-wvkcd\" (UID: \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\") " pod="openshift-console/console-7d44f9f6d-wvkcd" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.951882 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt776\" (UniqueName: \"kubernetes.io/projected/d7eb0acf-dfc4-4c24-8231-bfae5b620653-kube-api-access-vt776\") pod \"console-7d44f9f6d-wvkcd\" (UID: \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\") " pod="openshift-console/console-7d44f9f6d-wvkcd" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.951958 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7eb0acf-dfc4-4c24-8231-bfae5b620653-console-serving-cert\") pod \"console-7d44f9f6d-wvkcd\" (UID: \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\") " pod="openshift-console/console-7d44f9f6d-wvkcd" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.951976 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d7eb0acf-dfc4-4c24-8231-bfae5b620653-oauth-serving-cert\") pod \"console-7d44f9f6d-wvkcd\" (UID: \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\") " pod="openshift-console/console-7d44f9f6d-wvkcd" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.952726 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/d7eb0acf-dfc4-4c24-8231-bfae5b620653-console-config\") pod \"console-7d44f9f6d-wvkcd\" (UID: \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\") " pod="openshift-console/console-7d44f9f6d-wvkcd" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.952777 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7eb0acf-dfc4-4c24-8231-bfae5b620653-trusted-ca-bundle\") pod \"console-7d44f9f6d-wvkcd\" (UID: \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\") " pod="openshift-console/console-7d44f9f6d-wvkcd" Jan 29 16:55:06 crc kubenswrapper[4886]: I0129 16:55:06.952825 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7eb0acf-dfc4-4c24-8231-bfae5b620653-service-ca\") pod \"console-7d44f9f6d-wvkcd\" (UID: \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\") " pod="openshift-console/console-7d44f9f6d-wvkcd" Jan 29 16:55:07 crc kubenswrapper[4886]: I0129 16:55:07.053823 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7eb0acf-dfc4-4c24-8231-bfae5b620653-console-serving-cert\") pod \"console-7d44f9f6d-wvkcd\" (UID: \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\") " pod="openshift-console/console-7d44f9f6d-wvkcd" Jan 29 16:55:07 crc kubenswrapper[4886]: I0129 16:55:07.053869 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d7eb0acf-dfc4-4c24-8231-bfae5b620653-oauth-serving-cert\") pod \"console-7d44f9f6d-wvkcd\" (UID: \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\") " pod="openshift-console/console-7d44f9f6d-wvkcd" Jan 29 16:55:07 crc kubenswrapper[4886]: I0129 16:55:07.053896 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d7eb0acf-dfc4-4c24-8231-bfae5b620653-console-config\") pod \"console-7d44f9f6d-wvkcd\" (UID: \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\") " pod="openshift-console/console-7d44f9f6d-wvkcd" Jan 29 16:55:07 crc kubenswrapper[4886]: I0129 16:55:07.053943 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7eb0acf-dfc4-4c24-8231-bfae5b620653-trusted-ca-bundle\") pod \"console-7d44f9f6d-wvkcd\" (UID: \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\") " pod="openshift-console/console-7d44f9f6d-wvkcd" Jan 29 16:55:07 crc kubenswrapper[4886]: I0129 16:55:07.053974 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7eb0acf-dfc4-4c24-8231-bfae5b620653-service-ca\") pod \"console-7d44f9f6d-wvkcd\" (UID: \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\") " pod="openshift-console/console-7d44f9f6d-wvkcd" Jan 29 16:55:07 crc kubenswrapper[4886]: I0129 16:55:07.054020 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d7eb0acf-dfc4-4c24-8231-bfae5b620653-console-oauth-config\") pod \"console-7d44f9f6d-wvkcd\" (UID: \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\") " pod="openshift-console/console-7d44f9f6d-wvkcd" Jan 29 16:55:07 crc kubenswrapper[4886]: I0129 16:55:07.054050 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt776\" (UniqueName: 
\"kubernetes.io/projected/d7eb0acf-dfc4-4c24-8231-bfae5b620653-kube-api-access-vt776\") pod \"console-7d44f9f6d-wvkcd\" (UID: \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\") " pod="openshift-console/console-7d44f9f6d-wvkcd" Jan 29 16:55:07 crc kubenswrapper[4886]: I0129 16:55:07.054945 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d7eb0acf-dfc4-4c24-8231-bfae5b620653-oauth-serving-cert\") pod \"console-7d44f9f6d-wvkcd\" (UID: \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\") " pod="openshift-console/console-7d44f9f6d-wvkcd" Jan 29 16:55:07 crc kubenswrapper[4886]: I0129 16:55:07.055025 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d7eb0acf-dfc4-4c24-8231-bfae5b620653-console-config\") pod \"console-7d44f9f6d-wvkcd\" (UID: \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\") " pod="openshift-console/console-7d44f9f6d-wvkcd" Jan 29 16:55:07 crc kubenswrapper[4886]: I0129 16:55:07.055279 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7eb0acf-dfc4-4c24-8231-bfae5b620653-service-ca\") pod \"console-7d44f9f6d-wvkcd\" (UID: \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\") " pod="openshift-console/console-7d44f9f6d-wvkcd" Jan 29 16:55:07 crc kubenswrapper[4886]: I0129 16:55:07.055289 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7eb0acf-dfc4-4c24-8231-bfae5b620653-trusted-ca-bundle\") pod \"console-7d44f9f6d-wvkcd\" (UID: \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\") " pod="openshift-console/console-7d44f9f6d-wvkcd" Jan 29 16:55:07 crc kubenswrapper[4886]: I0129 16:55:07.059129 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7eb0acf-dfc4-4c24-8231-bfae5b620653-console-serving-cert\") pod \"console-7d44f9f6d-wvkcd\" (UID: \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\") " pod="openshift-console/console-7d44f9f6d-wvkcd" Jan 29 16:55:07 crc kubenswrapper[4886]: I0129 16:55:07.059791 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d7eb0acf-dfc4-4c24-8231-bfae5b620653-console-oauth-config\") pod \"console-7d44f9f6d-wvkcd\" (UID: \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\") " pod="openshift-console/console-7d44f9f6d-wvkcd" Jan 29 16:55:07 crc kubenswrapper[4886]: I0129 16:55:07.072076 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt776\" (UniqueName: \"kubernetes.io/projected/d7eb0acf-dfc4-4c24-8231-bfae5b620653-kube-api-access-vt776\") pod \"console-7d44f9f6d-wvkcd\" (UID: \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\") " pod="openshift-console/console-7d44f9f6d-wvkcd" Jan 29 16:55:07 crc kubenswrapper[4886]: I0129 16:55:07.155415 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c42903b0-c0d4-4c39-bed3-3c9d083e753d-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-mv5wp\" (UID: \"c42903b0-c0d4-4c39-bed3-3c9d083e753d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mv5wp" Jan 29 16:55:07 crc kubenswrapper[4886]: I0129 16:55:07.161955 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c42903b0-c0d4-4c39-bed3-3c9d083e753d-tls-key-pair\") pod 
\"nmstate-webhook-8474b5b9d8-mv5wp\" (UID: \"c42903b0-c0d4-4c39-bed3-3c9d083e753d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mv5wp" Jan 29 16:55:07 crc kubenswrapper[4886]: I0129 16:55:07.171002 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7d44f9f6d-wvkcd" Jan 29 16:55:07 crc kubenswrapper[4886]: I0129 16:55:07.259230 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-ntx9m"] Jan 29 16:55:07 crc kubenswrapper[4886]: I0129 16:55:07.357975 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2814fca3-5ea5-4b77-aad5-0308881c88bb-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-d4tp4\" (UID: \"2814fca3-5ea5-4b77-aad5-0308881c88bb\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d4tp4" Jan 29 16:55:07 crc kubenswrapper[4886]: I0129 16:55:07.363935 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2814fca3-5ea5-4b77-aad5-0308881c88bb-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-d4tp4\" (UID: \"2814fca3-5ea5-4b77-aad5-0308881c88bb\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d4tp4" Jan 29 16:55:07 crc kubenswrapper[4886]: I0129 16:55:07.376790 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mv5wp" Jan 29 16:55:07 crc kubenswrapper[4886]: I0129 16:55:07.431539 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-ntx9m" event={"ID":"515c481a-e563-41c3-b5ff-d5957faf5217","Type":"ContainerStarted","Data":"e5c35961f61b0ca142eff5912053441fa3277d22ff63b267234ff963c21cb123"} Jan 29 16:55:07 crc kubenswrapper[4886]: I0129 16:55:07.432528 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-9lh4n" event={"ID":"848b9df5-c882-4017-b1ad-6ac496646a76","Type":"ContainerStarted","Data":"f2c5b96b2c7501c982fdee9d1d0fabac6399f29a42f0cea1334609a6d68f31b8"} Jan 29 16:55:07 crc kubenswrapper[4886]: I0129 16:55:07.504977 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d4tp4" Jan 29 16:55:07 crc kubenswrapper[4886]: I0129 16:55:07.616908 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7d44f9f6d-wvkcd"] Jan 29 16:55:07 crc kubenswrapper[4886]: W0129 16:55:07.636080 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7eb0acf_dfc4_4c24_8231_bfae5b620653.slice/crio-2dde3f8777f56361bbc961c320b3499545e524fdb56d2e7e1762b3c549f1e8ca WatchSource:0}: Error finding container 2dde3f8777f56361bbc961c320b3499545e524fdb56d2e7e1762b3c549f1e8ca: Status 404 returned error can't find the container with id 2dde3f8777f56361bbc961c320b3499545e524fdb56d2e7e1762b3c549f1e8ca Jan 29 16:55:07 crc kubenswrapper[4886]: I0129 16:55:07.781952 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-mv5wp"] Jan 29 16:55:07 crc kubenswrapper[4886]: W0129 16:55:07.789926 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc42903b0_c0d4_4c39_bed3_3c9d083e753d.slice/crio-0e30b42164a13454d5d21ae69e07d6266d571f80251f6d10abf5e6f5aebabbb6 WatchSource:0}: Error finding container 0e30b42164a13454d5d21ae69e07d6266d571f80251f6d10abf5e6f5aebabbb6: Status 404 returned error can't find the container with id 0e30b42164a13454d5d21ae69e07d6266d571f80251f6d10abf5e6f5aebabbb6 Jan 29 16:55:07 crc kubenswrapper[4886]: I0129 16:55:07.997177 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-d4tp4"] Jan 29 16:55:07 crc kubenswrapper[4886]: W0129 16:55:07.999847 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2814fca3_5ea5_4b77_aad5_0308881c88bb.slice/crio-049994e5706296f07a69bbc94ba72048dae5fb4de71dcf90e17e5808b2460a14 WatchSource:0}: Error finding container 049994e5706296f07a69bbc94ba72048dae5fb4de71dcf90e17e5808b2460a14: Status 404 returned error can't find the container with id 049994e5706296f07a69bbc94ba72048dae5fb4de71dcf90e17e5808b2460a14 Jan 29 16:55:08 crc kubenswrapper[4886]: I0129 16:55:08.444133 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mv5wp" event={"ID":"c42903b0-c0d4-4c39-bed3-3c9d083e753d","Type":"ContainerStarted","Data":"0e30b42164a13454d5d21ae69e07d6266d571f80251f6d10abf5e6f5aebabbb6"} Jan 29 16:55:08 crc kubenswrapper[4886]: I0129 16:55:08.446560 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7d44f9f6d-wvkcd" event={"ID":"d7eb0acf-dfc4-4c24-8231-bfae5b620653","Type":"ContainerStarted","Data":"83d754bde6259c4ef4756a1b0a86efc202f6d81cccfa70e563b1ad9cae41b68f"} Jan 29 16:55:08 crc kubenswrapper[4886]: I0129 16:55:08.446659 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7d44f9f6d-wvkcd" event={"ID":"d7eb0acf-dfc4-4c24-8231-bfae5b620653","Type":"ContainerStarted","Data":"2dde3f8777f56361bbc961c320b3499545e524fdb56d2e7e1762b3c549f1e8ca"} Jan 29 16:55:08 crc kubenswrapper[4886]: I0129 16:55:08.450060 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d4tp4" event={"ID":"2814fca3-5ea5-4b77-aad5-0308881c88bb","Type":"ContainerStarted","Data":"049994e5706296f07a69bbc94ba72048dae5fb4de71dcf90e17e5808b2460a14"} Jan 29 16:55:08 crc 
kubenswrapper[4886]: I0129 16:55:08.479402 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7d44f9f6d-wvkcd" podStartSLOduration=2.47937693 podStartE2EDuration="2.47937693s" podCreationTimestamp="2026-01-29 16:55:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:55:08.474055834 +0000 UTC m=+1991.382775126" watchObservedRunningTime="2026-01-29 16:55:08.47937693 +0000 UTC m=+1991.388096232" Jan 29 16:55:10 crc kubenswrapper[4886]: I0129 16:55:10.480945 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mv5wp" event={"ID":"c42903b0-c0d4-4c39-bed3-3c9d083e753d","Type":"ContainerStarted","Data":"dd9b27566a3e9b114a8a1aa3238466a492e42e9371d79541dc76fd2dc3448c5b"} Jan 29 16:55:10 crc kubenswrapper[4886]: I0129 16:55:10.481703 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mv5wp" Jan 29 16:55:10 crc kubenswrapper[4886]: I0129 16:55:10.483397 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-ntx9m" event={"ID":"515c481a-e563-41c3-b5ff-d5957faf5217","Type":"ContainerStarted","Data":"c4a932209da16152e09d8640c43a6fdc4ec5c4b4650ffd8b919c9dffacd5926c"} Jan 29 16:55:10 crc kubenswrapper[4886]: I0129 16:55:10.485737 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-9lh4n" event={"ID":"848b9df5-c882-4017-b1ad-6ac496646a76","Type":"ContainerStarted","Data":"975c6cd9ca3f769059b929b0357a188bbb30200e72ab2d272a5f623c49997894"} Jan 29 16:55:10 crc kubenswrapper[4886]: I0129 16:55:10.485946 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-9lh4n" Jan 29 16:55:10 crc kubenswrapper[4886]: I0129 16:55:10.506023 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mv5wp" podStartSLOduration=2.293318841 podStartE2EDuration="4.506008612s" podCreationTimestamp="2026-01-29 16:55:06 +0000 UTC" firstStartedPulling="2026-01-29 16:55:07.791626371 +0000 UTC m=+1990.700345663" lastFinishedPulling="2026-01-29 16:55:10.004316162 +0000 UTC m=+1992.913035434" observedRunningTime="2026-01-29 16:55:10.504678365 +0000 UTC m=+1993.413397637" watchObservedRunningTime="2026-01-29 16:55:10.506008612 +0000 UTC m=+1993.414727884" Jan 29 16:55:10 crc kubenswrapper[4886]: I0129 16:55:10.526040 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-9lh4n" podStartSLOduration=1.446406159 podStartE2EDuration="4.52602132s" podCreationTimestamp="2026-01-29 16:55:06 +0000 UTC" firstStartedPulling="2026-01-29 16:55:06.879862192 +0000 UTC m=+1989.788581464" lastFinishedPulling="2026-01-29 16:55:09.959477293 +0000 UTC m=+1992.868196625" observedRunningTime="2026-01-29 16:55:10.520850049 +0000 UTC m=+1993.429569341" watchObservedRunningTime="2026-01-29 16:55:10.52602132 +0000 UTC m=+1993.434740592" Jan 29 16:55:11 crc kubenswrapper[4886]: I0129 16:55:11.495348 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d4tp4" event={"ID":"2814fca3-5ea5-4b77-aad5-0308881c88bb","Type":"ContainerStarted","Data":"485dc32f331852b42eca3bac4a6fb624e25cbce299256c1ef555e1e33c7a90d4"} Jan 29 16:55:11 crc kubenswrapper[4886]: I0129 16:55:11.514677 4886 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d4tp4" podStartSLOduration=2.359282699 podStartE2EDuration="5.514639545s" podCreationTimestamp="2026-01-29 16:55:06 +0000 UTC" firstStartedPulling="2026-01-29 16:55:08.001836512 +0000 UTC m=+1990.910555784" lastFinishedPulling="2026-01-29 16:55:11.157193338 +0000 UTC m=+1994.065912630" observedRunningTime="2026-01-29 16:55:11.51410197 +0000 UTC m=+1994.422821252" watchObservedRunningTime="2026-01-29 16:55:11.514639545 +0000 UTC m=+1994.423358817" Jan 29 16:55:13 crc kubenswrapper[4886]: I0129 16:55:13.513221 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-ntx9m" event={"ID":"515c481a-e563-41c3-b5ff-d5957faf5217","Type":"ContainerStarted","Data":"e2f541822966161e051c7b85f7a4d92b179228cca604f78b7d8a1fa10421b2ef"} Jan 29 16:55:13 crc kubenswrapper[4886]: I0129 16:55:13.540657 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-ntx9m" podStartSLOduration=2.487780318 podStartE2EDuration="7.540626239s" podCreationTimestamp="2026-01-29 16:55:06 +0000 UTC" firstStartedPulling="2026-01-29 16:55:07.274137428 +0000 UTC m=+1990.182856700" lastFinishedPulling="2026-01-29 16:55:12.326983339 +0000 UTC m=+1995.235702621" observedRunningTime="2026-01-29 16:55:13.534571783 +0000 UTC m=+1996.443291095" watchObservedRunningTime="2026-01-29 16:55:13.540626239 +0000 UTC m=+1996.449345541" Jan 29 16:55:16 crc kubenswrapper[4886]: I0129 16:55:16.861079 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-9lh4n" Jan 29 16:55:17 crc kubenswrapper[4886]: I0129 16:55:17.172135 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7d44f9f6d-wvkcd" Jan 29 16:55:17 crc kubenswrapper[4886]: I0129 16:55:17.172221 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7d44f9f6d-wvkcd" Jan 29 16:55:17 crc kubenswrapper[4886]: I0129 16:55:17.177569 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7d44f9f6d-wvkcd" Jan 29 16:55:17 crc kubenswrapper[4886]: I0129 16:55:17.556269 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7d44f9f6d-wvkcd" Jan 29 16:55:17 crc kubenswrapper[4886]: I0129 16:55:17.637981 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-664586d6fb-g55cf"] Jan 29 16:55:27 crc kubenswrapper[4886]: I0129 16:55:27.383630 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mv5wp" Jan 29 16:55:42 crc kubenswrapper[4886]: I0129 16:55:42.695543 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-664586d6fb-g55cf" podUID="42357e7c-de03-4b8b-80f5-f946411c67f7" containerName="console" containerID="cri-o://6019dfcf6dda95ddc80718ca451b48d8dede9d785bf016b5b0c27dcf7bc93e38" gracePeriod=15 Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.102667 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-664586d6fb-g55cf_42357e7c-de03-4b8b-80f5-f946411c67f7/console/0.log" Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.103054 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-664586d6fb-g55cf" Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.182434 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/42357e7c-de03-4b8b-80f5-f946411c67f7-console-oauth-config\") pod \"42357e7c-de03-4b8b-80f5-f946411c67f7\" (UID: \"42357e7c-de03-4b8b-80f5-f946411c67f7\") " Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.182553 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/42357e7c-de03-4b8b-80f5-f946411c67f7-console-config\") pod \"42357e7c-de03-4b8b-80f5-f946411c67f7\" (UID: \"42357e7c-de03-4b8b-80f5-f946411c67f7\") " Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.182599 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42357e7c-de03-4b8b-80f5-f946411c67f7-trusted-ca-bundle\") pod \"42357e7c-de03-4b8b-80f5-f946411c67f7\" (UID: \"42357e7c-de03-4b8b-80f5-f946411c67f7\") " Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.182671 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/42357e7c-de03-4b8b-80f5-f946411c67f7-service-ca\") pod \"42357e7c-de03-4b8b-80f5-f946411c67f7\" (UID: \"42357e7c-de03-4b8b-80f5-f946411c67f7\") " Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.182706 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/42357e7c-de03-4b8b-80f5-f946411c67f7-oauth-serving-cert\") pod \"42357e7c-de03-4b8b-80f5-f946411c67f7\" (UID: \"42357e7c-de03-4b8b-80f5-f946411c67f7\") " Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.182788 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln452\" (UniqueName: \"kubernetes.io/projected/42357e7c-de03-4b8b-80f5-f946411c67f7-kube-api-access-ln452\") pod \"42357e7c-de03-4b8b-80f5-f946411c67f7\" (UID: \"42357e7c-de03-4b8b-80f5-f946411c67f7\") " Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.182898 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/42357e7c-de03-4b8b-80f5-f946411c67f7-console-serving-cert\") pod \"42357e7c-de03-4b8b-80f5-f946411c67f7\" (UID: \"42357e7c-de03-4b8b-80f5-f946411c67f7\") " Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.183617 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42357e7c-de03-4b8b-80f5-f946411c67f7-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "42357e7c-de03-4b8b-80f5-f946411c67f7" (UID: "42357e7c-de03-4b8b-80f5-f946411c67f7"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.183630 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42357e7c-de03-4b8b-80f5-f946411c67f7-service-ca" (OuterVolumeSpecName: "service-ca") pod "42357e7c-de03-4b8b-80f5-f946411c67f7" (UID: "42357e7c-de03-4b8b-80f5-f946411c67f7"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.183644 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42357e7c-de03-4b8b-80f5-f946411c67f7-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "42357e7c-de03-4b8b-80f5-f946411c67f7" (UID: "42357e7c-de03-4b8b-80f5-f946411c67f7"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.183703 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42357e7c-de03-4b8b-80f5-f946411c67f7-console-config" (OuterVolumeSpecName: "console-config") pod "42357e7c-de03-4b8b-80f5-f946411c67f7" (UID: "42357e7c-de03-4b8b-80f5-f946411c67f7"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.184302 4886 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/42357e7c-de03-4b8b-80f5-f946411c67f7-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.184335 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42357e7c-de03-4b8b-80f5-f946411c67f7-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.184344 4886 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/42357e7c-de03-4b8b-80f5-f946411c67f7-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.184352 4886 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/42357e7c-de03-4b8b-80f5-f946411c67f7-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.188890 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42357e7c-de03-4b8b-80f5-f946411c67f7-kube-api-access-ln452" (OuterVolumeSpecName: "kube-api-access-ln452") pod "42357e7c-de03-4b8b-80f5-f946411c67f7" (UID: "42357e7c-de03-4b8b-80f5-f946411c67f7"). InnerVolumeSpecName "kube-api-access-ln452". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.189459 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42357e7c-de03-4b8b-80f5-f946411c67f7-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "42357e7c-de03-4b8b-80f5-f946411c67f7" (UID: "42357e7c-de03-4b8b-80f5-f946411c67f7"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.190146 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42357e7c-de03-4b8b-80f5-f946411c67f7-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "42357e7c-de03-4b8b-80f5-f946411c67f7" (UID: "42357e7c-de03-4b8b-80f5-f946411c67f7"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.286369 4886 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/42357e7c-de03-4b8b-80f5-f946411c67f7-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.286724 4886 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/42357e7c-de03-4b8b-80f5-f946411c67f7-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.286736 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln452\" (UniqueName: \"kubernetes.io/projected/42357e7c-de03-4b8b-80f5-f946411c67f7-kube-api-access-ln452\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.781785 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-664586d6fb-g55cf_42357e7c-de03-4b8b-80f5-f946411c67f7/console/0.log" Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.781834 4886 generic.go:334] "Generic (PLEG): container finished" podID="42357e7c-de03-4b8b-80f5-f946411c67f7" containerID="6019dfcf6dda95ddc80718ca451b48d8dede9d785bf016b5b0c27dcf7bc93e38" exitCode=2 Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.781871 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-664586d6fb-g55cf" event={"ID":"42357e7c-de03-4b8b-80f5-f946411c67f7","Type":"ContainerDied","Data":"6019dfcf6dda95ddc80718ca451b48d8dede9d785bf016b5b0c27dcf7bc93e38"} Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.781903 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-664586d6fb-g55cf" event={"ID":"42357e7c-de03-4b8b-80f5-f946411c67f7","Type":"ContainerDied","Data":"4c6fe087595c24e70608f508c9599d4ead9e60d5c503746f12585384b13bc295"} Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.781925 4886 scope.go:117] "RemoveContainer" containerID="6019dfcf6dda95ddc80718ca451b48d8dede9d785bf016b5b0c27dcf7bc93e38" Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.782039 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-664586d6fb-g55cf" Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.810914 4886 scope.go:117] "RemoveContainer" containerID="6019dfcf6dda95ddc80718ca451b48d8dede9d785bf016b5b0c27dcf7bc93e38" Jan 29 16:55:43 crc kubenswrapper[4886]: E0129 16:55:43.811293 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6019dfcf6dda95ddc80718ca451b48d8dede9d785bf016b5b0c27dcf7bc93e38\": container with ID starting with 6019dfcf6dda95ddc80718ca451b48d8dede9d785bf016b5b0c27dcf7bc93e38 not found: ID does not exist" containerID="6019dfcf6dda95ddc80718ca451b48d8dede9d785bf016b5b0c27dcf7bc93e38" Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.811355 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6019dfcf6dda95ddc80718ca451b48d8dede9d785bf016b5b0c27dcf7bc93e38"} err="failed to get container status \"6019dfcf6dda95ddc80718ca451b48d8dede9d785bf016b5b0c27dcf7bc93e38\": rpc error: code = NotFound desc = could not find container \"6019dfcf6dda95ddc80718ca451b48d8dede9d785bf016b5b0c27dcf7bc93e38\": container with ID starting with 6019dfcf6dda95ddc80718ca451b48d8dede9d785bf016b5b0c27dcf7bc93e38 not found: ID does not exist" Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.815696 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-664586d6fb-g55cf"] Jan 29 16:55:43 crc kubenswrapper[4886]: I0129 16:55:43.821615 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-664586d6fb-g55cf"] Jan 29 16:55:44 crc kubenswrapper[4886]: I0129 16:55:44.624965 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42357e7c-de03-4b8b-80f5-f946411c67f7" path="/var/lib/kubelet/pods/42357e7c-de03-4b8b-80f5-f946411c67f7/volumes" Jan 29 16:55:58 crc kubenswrapper[4886]: I0129 16:55:58.241288 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5"] Jan 29 16:55:58 crc kubenswrapper[4886]: E0129 16:55:58.242217 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42357e7c-de03-4b8b-80f5-f946411c67f7" containerName="console" Jan 29 16:55:58 crc kubenswrapper[4886]: I0129 16:55:58.242237 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="42357e7c-de03-4b8b-80f5-f946411c67f7" containerName="console" Jan 29 16:55:58 crc kubenswrapper[4886]: I0129 16:55:58.242499 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="42357e7c-de03-4b8b-80f5-f946411c67f7" containerName="console" Jan 29 16:55:58 crc kubenswrapper[4886]: I0129 16:55:58.244144 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5" Jan 29 16:55:58 crc kubenswrapper[4886]: I0129 16:55:58.256675 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 16:55:58 crc kubenswrapper[4886]: I0129 16:55:58.258850 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5"] Jan 29 16:55:58 crc kubenswrapper[4886]: I0129 16:55:58.359235 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aa613edd-15e0-466f-8739-ab30f6d61801-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5\" (UID: \"aa613edd-15e0-466f-8739-ab30f6d61801\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5" Jan 29 16:55:58 crc kubenswrapper[4886]: I0129 16:55:58.359540 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8dq4\" (UniqueName: \"kubernetes.io/projected/aa613edd-15e0-466f-8739-ab30f6d61801-kube-api-access-z8dq4\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5\" (UID: \"aa613edd-15e0-466f-8739-ab30f6d61801\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5" Jan 29 16:55:58 crc kubenswrapper[4886]: I0129 16:55:58.359699 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aa613edd-15e0-466f-8739-ab30f6d61801-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5\" (UID: \"aa613edd-15e0-466f-8739-ab30f6d61801\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5" Jan 29 16:55:58 crc kubenswrapper[4886]: I0129 16:55:58.461617 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aa613edd-15e0-466f-8739-ab30f6d61801-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5\" (UID: \"aa613edd-15e0-466f-8739-ab30f6d61801\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5" Jan 29 16:55:58 crc kubenswrapper[4886]: I0129 16:55:58.461734 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8dq4\" (UniqueName: \"kubernetes.io/projected/aa613edd-15e0-466f-8739-ab30f6d61801-kube-api-access-z8dq4\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5\" (UID: \"aa613edd-15e0-466f-8739-ab30f6d61801\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5" Jan 29 16:55:58 crc kubenswrapper[4886]: I0129 16:55:58.461801 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aa613edd-15e0-466f-8739-ab30f6d61801-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5\" (UID: \"aa613edd-15e0-466f-8739-ab30f6d61801\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5" Jan 29 16:55:58 crc kubenswrapper[4886]: I0129 16:55:58.462266 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/aa613edd-15e0-466f-8739-ab30f6d61801-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5\" (UID: \"aa613edd-15e0-466f-8739-ab30f6d61801\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5" Jan 29 16:55:58 crc kubenswrapper[4886]: I0129 16:55:58.462274 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aa613edd-15e0-466f-8739-ab30f6d61801-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5\" (UID: \"aa613edd-15e0-466f-8739-ab30f6d61801\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5" Jan 29 16:55:58 crc kubenswrapper[4886]: I0129 16:55:58.485709 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8dq4\" (UniqueName: \"kubernetes.io/projected/aa613edd-15e0-466f-8739-ab30f6d61801-kube-api-access-z8dq4\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5\" (UID: \"aa613edd-15e0-466f-8739-ab30f6d61801\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5" Jan 29 16:55:58 crc kubenswrapper[4886]: I0129 16:55:58.588552 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 16:55:58 crc kubenswrapper[4886]: I0129 16:55:58.596534 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5" Jan 29 16:55:59 crc kubenswrapper[4886]: I0129 16:55:59.066291 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5"] Jan 29 16:55:59 crc kubenswrapper[4886]: W0129 16:55:59.071934 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa613edd_15e0_466f_8739_ab30f6d61801.slice/crio-8be2bbeba35f6a3a828f1cf712135895f74da1f19506f9a662f89c6ac9ba1865 WatchSource:0}: Error finding container 8be2bbeba35f6a3a828f1cf712135895f74da1f19506f9a662f89c6ac9ba1865: Status 404 returned error can't find the container with id 8be2bbeba35f6a3a828f1cf712135895f74da1f19506f9a662f89c6ac9ba1865 Jan 29 16:55:59 crc kubenswrapper[4886]: I0129 16:55:59.661211 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:55:59 crc kubenswrapper[4886]: I0129 16:55:59.661596 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:55:59 crc kubenswrapper[4886]: I0129 16:55:59.934793 4886 generic.go:334] "Generic (PLEG): container finished" podID="aa613edd-15e0-466f-8739-ab30f6d61801" containerID="b3ba887bb48636a071a891e42be18b55f6a9e2fbc6239ddf3528ab05267a3a5f" exitCode=0 Jan 29 16:55:59 crc kubenswrapper[4886]: I0129 16:55:59.934842 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5" event={"ID":"aa613edd-15e0-466f-8739-ab30f6d61801","Type":"ContainerDied","Data":"b3ba887bb48636a071a891e42be18b55f6a9e2fbc6239ddf3528ab05267a3a5f"} Jan 29 16:55:59 crc kubenswrapper[4886]: I0129 16:55:59.934869 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5" event={"ID":"aa613edd-15e0-466f-8739-ab30f6d61801","Type":"ContainerStarted","Data":"8be2bbeba35f6a3a828f1cf712135895f74da1f19506f9a662f89c6ac9ba1865"} Jan 29 16:56:00 crc kubenswrapper[4886]: E0129 16:56:00.069340 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:43205585b4bfcac18bfdf918280b62fe382a0d7926e6fdbea5edd703fa57cd87: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:43205585b4bfcac18bfdf918280b62fe382a0d7926e6fdbea5edd703fa57cd87" Jan 29 16:56:00 crc kubenswrapper[4886]: E0129 16:56:00.069510 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:43205585b4bfcac18bfdf918280b62fe382a0d7926e6fdbea5edd703fa57cd87,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z8dq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5_openshift-marketplace(aa613edd-15e0-466f-8739-ab30f6d61801): ErrImagePull: initializing source docker://registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:43205585b4bfcac18bfdf918280b62fe382a0d7926e6fdbea5edd703fa57cd87: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:56:00 crc kubenswrapper[4886]: E0129 16:56:00.070711 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"initializing source docker://registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:43205585b4bfcac18bfdf918280b62fe382a0d7926e6fdbea5edd703fa57cd87: 
Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5" podUID="aa613edd-15e0-466f-8739-ab30f6d61801" Jan 29 16:56:00 crc kubenswrapper[4886]: E0129 16:56:00.941948 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:43205585b4bfcac18bfdf918280b62fe382a0d7926e6fdbea5edd703fa57cd87\\\"\"" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5" podUID="aa613edd-15e0-466f-8739-ab30f6d61801" Jan 29 16:56:15 crc kubenswrapper[4886]: E0129 16:56:15.763649 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:43205585b4bfcac18bfdf918280b62fe382a0d7926e6fdbea5edd703fa57cd87: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:43205585b4bfcac18bfdf918280b62fe382a0d7926e6fdbea5edd703fa57cd87" Jan 29 16:56:15 crc kubenswrapper[4886]: E0129 16:56:15.764367 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:43205585b4bfcac18bfdf918280b62fe382a0d7926e6fdbea5edd703fa57cd87,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z8dq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5_openshift-marketplace(aa613edd-15e0-466f-8739-ab30f6d61801): ErrImagePull: initializing source docker://registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:43205585b4bfcac18bfdf918280b62fe382a0d7926e6fdbea5edd703fa57cd87: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:56:15 crc kubenswrapper[4886]: E0129 16:56:15.765673 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"initializing source 
docker://registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:43205585b4bfcac18bfdf918280b62fe382a0d7926e6fdbea5edd703fa57cd87: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5" podUID="aa613edd-15e0-466f-8739-ab30f6d61801" Jan 29 16:56:21 crc kubenswrapper[4886]: I0129 16:56:21.797460 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m4fv5"] Jan 29 16:56:21 crc kubenswrapper[4886]: I0129 16:56:21.805622 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4fv5" Jan 29 16:56:21 crc kubenswrapper[4886]: I0129 16:56:21.809123 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4fv5"] Jan 29 16:56:21 crc kubenswrapper[4886]: I0129 16:56:21.991585 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kfqs\" (UniqueName: \"kubernetes.io/projected/3e333f39-f93b-4066-8e9f-4bd27e4d3672-kube-api-access-6kfqs\") pod \"redhat-marketplace-m4fv5\" (UID: \"3e333f39-f93b-4066-8e9f-4bd27e4d3672\") " pod="openshift-marketplace/redhat-marketplace-m4fv5" Jan 29 16:56:21 crc kubenswrapper[4886]: I0129 16:56:21.991732 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e333f39-f93b-4066-8e9f-4bd27e4d3672-catalog-content\") pod \"redhat-marketplace-m4fv5\" (UID: \"3e333f39-f93b-4066-8e9f-4bd27e4d3672\") " pod="openshift-marketplace/redhat-marketplace-m4fv5" Jan 29 16:56:21 crc kubenswrapper[4886]: I0129 16:56:21.991918 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e333f39-f93b-4066-8e9f-4bd27e4d3672-utilities\") pod \"redhat-marketplace-m4fv5\" (UID: \"3e333f39-f93b-4066-8e9f-4bd27e4d3672\") " pod="openshift-marketplace/redhat-marketplace-m4fv5" Jan 29 16:56:22 crc kubenswrapper[4886]: I0129 16:56:22.095150 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e333f39-f93b-4066-8e9f-4bd27e4d3672-utilities\") pod \"redhat-marketplace-m4fv5\" (UID: \"3e333f39-f93b-4066-8e9f-4bd27e4d3672\") " pod="openshift-marketplace/redhat-marketplace-m4fv5" Jan 29 16:56:22 crc kubenswrapper[4886]: I0129 16:56:22.095393 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kfqs\" (UniqueName: \"kubernetes.io/projected/3e333f39-f93b-4066-8e9f-4bd27e4d3672-kube-api-access-6kfqs\") pod \"redhat-marketplace-m4fv5\" (UID: \"3e333f39-f93b-4066-8e9f-4bd27e4d3672\") " pod="openshift-marketplace/redhat-marketplace-m4fv5" Jan 29 16:56:22 crc kubenswrapper[4886]: I0129 16:56:22.095479 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e333f39-f93b-4066-8e9f-4bd27e4d3672-catalog-content\") pod \"redhat-marketplace-m4fv5\" (UID: \"3e333f39-f93b-4066-8e9f-4bd27e4d3672\") " pod="openshift-marketplace/redhat-marketplace-m4fv5" Jan 29 16:56:22 crc kubenswrapper[4886]: I0129 16:56:22.095776 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e333f39-f93b-4066-8e9f-4bd27e4d3672-utilities\") 
pod \"redhat-marketplace-m4fv5\" (UID: \"3e333f39-f93b-4066-8e9f-4bd27e4d3672\") " pod="openshift-marketplace/redhat-marketplace-m4fv5" Jan 29 16:56:22 crc kubenswrapper[4886]: I0129 16:56:22.095997 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e333f39-f93b-4066-8e9f-4bd27e4d3672-catalog-content\") pod \"redhat-marketplace-m4fv5\" (UID: \"3e333f39-f93b-4066-8e9f-4bd27e4d3672\") " pod="openshift-marketplace/redhat-marketplace-m4fv5" Jan 29 16:56:22 crc kubenswrapper[4886]: I0129 16:56:22.119351 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kfqs\" (UniqueName: \"kubernetes.io/projected/3e333f39-f93b-4066-8e9f-4bd27e4d3672-kube-api-access-6kfqs\") pod \"redhat-marketplace-m4fv5\" (UID: \"3e333f39-f93b-4066-8e9f-4bd27e4d3672\") " pod="openshift-marketplace/redhat-marketplace-m4fv5" Jan 29 16:56:22 crc kubenswrapper[4886]: I0129 16:56:22.129275 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4fv5" Jan 29 16:56:22 crc kubenswrapper[4886]: I0129 16:56:22.531490 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4fv5"] Jan 29 16:56:23 crc kubenswrapper[4886]: I0129 16:56:23.119885 4886 generic.go:334] "Generic (PLEG): container finished" podID="3e333f39-f93b-4066-8e9f-4bd27e4d3672" containerID="54c413f049295c75ea245b7bf5b81932f10621e4a5575c34da54c41a85be6026" exitCode=0 Jan 29 16:56:23 crc kubenswrapper[4886]: I0129 16:56:23.119934 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4fv5" event={"ID":"3e333f39-f93b-4066-8e9f-4bd27e4d3672","Type":"ContainerDied","Data":"54c413f049295c75ea245b7bf5b81932f10621e4a5575c34da54c41a85be6026"} Jan 29 16:56:23 crc kubenswrapper[4886]: I0129 16:56:23.120253 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4fv5" event={"ID":"3e333f39-f93b-4066-8e9f-4bd27e4d3672","Type":"ContainerStarted","Data":"721f687c812954ac213bf098f41dc7b5630da2bcf0b09ba3c2bdd27881939e63"} Jan 29 16:56:23 crc kubenswrapper[4886]: E0129 16:56:23.253657 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:56:23 crc kubenswrapper[4886]: E0129 16:56:23.253844 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kfqs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-m4fv5_openshift-marketplace(3e333f39-f93b-4066-8e9f-4bd27e4d3672): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:56:23 crc kubenswrapper[4886]: E0129 16:56:23.255063 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-m4fv5" podUID="3e333f39-f93b-4066-8e9f-4bd27e4d3672" Jan 29 16:56:24 crc kubenswrapper[4886]: E0129 16:56:24.132013 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-m4fv5" podUID="3e333f39-f93b-4066-8e9f-4bd27e4d3672" Jan 29 16:56:29 crc kubenswrapper[4886]: E0129 16:56:29.617881 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:43205585b4bfcac18bfdf918280b62fe382a0d7926e6fdbea5edd703fa57cd87\\\"\"" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5" podUID="aa613edd-15e0-466f-8739-ab30f6d61801" Jan 29 16:56:29 crc kubenswrapper[4886]: I0129 16:56:29.660867 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:56:29 crc kubenswrapper[4886]: I0129 16:56:29.660924 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:56:35 crc kubenswrapper[4886]: E0129 16:56:35.747924 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:56:35 crc kubenswrapper[4886]: E0129 16:56:35.749147 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kfqs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-m4fv5_openshift-marketplace(3e333f39-f93b-4066-8e9f-4bd27e4d3672): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:56:35 crc kubenswrapper[4886]: E0129 16:56:35.750514 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-m4fv5" podUID="3e333f39-f93b-4066-8e9f-4bd27e4d3672" Jan 29 16:56:44 crc kubenswrapper[4886]: E0129 16:56:44.746030 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:43205585b4bfcac18bfdf918280b62fe382a0d7926e6fdbea5edd703fa57cd87: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:43205585b4bfcac18bfdf918280b62fe382a0d7926e6fdbea5edd703fa57cd87" Jan 29 16:56:44 crc kubenswrapper[4886]: 
E0129 16:56:44.746667 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:43205585b4bfcac18bfdf918280b62fe382a0d7926e6fdbea5edd703fa57cd87,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z8dq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5_openshift-marketplace(aa613edd-15e0-466f-8739-ab30f6d61801): ErrImagePull: initializing source docker://registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:43205585b4bfcac18bfdf918280b62fe382a0d7926e6fdbea5edd703fa57cd87: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:56:44 crc kubenswrapper[4886]: E0129 16:56:44.747863 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"initializing source docker://registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:43205585b4bfcac18bfdf918280b62fe382a0d7926e6fdbea5edd703fa57cd87: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5" podUID="aa613edd-15e0-466f-8739-ab30f6d61801" Jan 29 16:56:47 crc kubenswrapper[4886]: E0129 16:56:47.616897 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-m4fv5" podUID="3e333f39-f93b-4066-8e9f-4bd27e4d3672" Jan 29 16:56:58 crc kubenswrapper[4886]: E0129 16:56:58.619880 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:43205585b4bfcac18bfdf918280b62fe382a0d7926e6fdbea5edd703fa57cd87\\\"\"" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5" podUID="aa613edd-15e0-466f-8739-ab30f6d61801" Jan 29 16:56:59 crc kubenswrapper[4886]: I0129 16:56:59.661179 4886 
patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:56:59 crc kubenswrapper[4886]: I0129 16:56:59.662077 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:56:59 crc kubenswrapper[4886]: I0129 16:56:59.662214 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" Jan 29 16:56:59 crc kubenswrapper[4886]: I0129 16:56:59.662938 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ef97582eea2927ab131d16b422621b32afa666846864a223a782bc24fb0ddda"} pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:56:59 crc kubenswrapper[4886]: I0129 16:56:59.663122 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" containerID="cri-o://8ef97582eea2927ab131d16b422621b32afa666846864a223a782bc24fb0ddda" gracePeriod=600 Jan 29 16:56:59 crc kubenswrapper[4886]: E0129 16:56:59.746848 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:56:59 crc kubenswrapper[4886]: E0129 16:56:59.747000 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kfqs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-m4fv5_openshift-marketplace(3e333f39-f93b-4066-8e9f-4bd27e4d3672): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:56:59 crc kubenswrapper[4886]: E0129 16:56:59.748440 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-m4fv5" podUID="3e333f39-f93b-4066-8e9f-4bd27e4d3672" Jan 29 16:57:00 crc kubenswrapper[4886]: I0129 16:57:00.452581 4886 generic.go:334] "Generic (PLEG): container finished" podID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerID="8ef97582eea2927ab131d16b422621b32afa666846864a223a782bc24fb0ddda" exitCode=0 Jan 29 16:57:00 crc kubenswrapper[4886]: I0129 16:57:00.452664 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerDied","Data":"8ef97582eea2927ab131d16b422621b32afa666846864a223a782bc24fb0ddda"} Jan 29 16:57:00 crc kubenswrapper[4886]: I0129 16:57:00.452711 4886 scope.go:117] "RemoveContainer" containerID="705ca471a878082d4a93a73d2095863766a13245174606f1f47cdefc4bd2e463" Jan 29 16:57:01 crc kubenswrapper[4886]: I0129 16:57:01.463934 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerStarted","Data":"1ef597c576c05004c5148470ade7ddd51ab3cad8d942f918ff09afb054559dfc"} Jan 29 16:57:11 crc kubenswrapper[4886]: E0129 16:57:11.619284 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:43205585b4bfcac18bfdf918280b62fe382a0d7926e6fdbea5edd703fa57cd87\\\"\"" 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5" podUID="aa613edd-15e0-466f-8739-ab30f6d61801" Jan 29 16:57:13 crc kubenswrapper[4886]: E0129 16:57:13.618708 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-m4fv5" podUID="3e333f39-f93b-4066-8e9f-4bd27e4d3672" Jan 29 16:57:23 crc kubenswrapper[4886]: E0129 16:57:23.617564 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:43205585b4bfcac18bfdf918280b62fe382a0d7926e6fdbea5edd703fa57cd87\\\"\"" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5" podUID="aa613edd-15e0-466f-8739-ab30f6d61801" Jan 29 16:57:26 crc kubenswrapper[4886]: E0129 16:57:26.617799 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-m4fv5" podUID="3e333f39-f93b-4066-8e9f-4bd27e4d3672" Jan 29 16:57:39 crc kubenswrapper[4886]: I0129 16:57:39.809724 4886 generic.go:334] "Generic (PLEG): container finished" podID="aa613edd-15e0-466f-8739-ab30f6d61801" containerID="ca5d820f84d33a6787485746a40ce0ca702d98726bdfd28f0b841d12759cdee5" exitCode=0 Jan 29 16:57:39 crc kubenswrapper[4886]: I0129 16:57:39.810012 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5" event={"ID":"aa613edd-15e0-466f-8739-ab30f6d61801","Type":"ContainerDied","Data":"ca5d820f84d33a6787485746a40ce0ca702d98726bdfd28f0b841d12759cdee5"} Jan 29 16:57:40 crc kubenswrapper[4886]: I0129 16:57:40.833630 4886 generic.go:334] "Generic (PLEG): container finished" podID="aa613edd-15e0-466f-8739-ab30f6d61801" containerID="be763c4ea500c4509b35f741338737b9173afba1ba0428d16b5db6b158cc301f" exitCode=0 Jan 29 16:57:40 crc kubenswrapper[4886]: I0129 16:57:40.833747 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5" event={"ID":"aa613edd-15e0-466f-8739-ab30f6d61801","Type":"ContainerDied","Data":"be763c4ea500c4509b35f741338737b9173afba1ba0428d16b5db6b158cc301f"} Jan 29 16:57:41 crc kubenswrapper[4886]: E0129 16:57:41.741113 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:57:41 crc kubenswrapper[4886]: E0129 16:57:41.741721 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kfqs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-m4fv5_openshift-marketplace(3e333f39-f93b-4066-8e9f-4bd27e4d3672): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:57:41 crc kubenswrapper[4886]: E0129 16:57:41.743043 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-m4fv5" podUID="3e333f39-f93b-4066-8e9f-4bd27e4d3672" Jan 29 16:57:42 crc kubenswrapper[4886]: I0129 16:57:42.203860 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5" Jan 29 16:57:42 crc kubenswrapper[4886]: I0129 16:57:42.327316 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aa613edd-15e0-466f-8739-ab30f6d61801-bundle\") pod \"aa613edd-15e0-466f-8739-ab30f6d61801\" (UID: \"aa613edd-15e0-466f-8739-ab30f6d61801\") " Jan 29 16:57:42 crc kubenswrapper[4886]: I0129 16:57:42.327415 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8dq4\" (UniqueName: \"kubernetes.io/projected/aa613edd-15e0-466f-8739-ab30f6d61801-kube-api-access-z8dq4\") pod \"aa613edd-15e0-466f-8739-ab30f6d61801\" (UID: \"aa613edd-15e0-466f-8739-ab30f6d61801\") " Jan 29 16:57:42 crc kubenswrapper[4886]: I0129 16:57:42.327431 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aa613edd-15e0-466f-8739-ab30f6d61801-util\") pod \"aa613edd-15e0-466f-8739-ab30f6d61801\" (UID: \"aa613edd-15e0-466f-8739-ab30f6d61801\") " Jan 29 16:57:42 crc kubenswrapper[4886]: I0129 16:57:42.328750 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa613edd-15e0-466f-8739-ab30f6d61801-bundle" (OuterVolumeSpecName: "bundle") pod "aa613edd-15e0-466f-8739-ab30f6d61801" (UID: "aa613edd-15e0-466f-8739-ab30f6d61801"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:57:42 crc kubenswrapper[4886]: I0129 16:57:42.333620 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa613edd-15e0-466f-8739-ab30f6d61801-kube-api-access-z8dq4" (OuterVolumeSpecName: "kube-api-access-z8dq4") pod "aa613edd-15e0-466f-8739-ab30f6d61801" (UID: "aa613edd-15e0-466f-8739-ab30f6d61801"). InnerVolumeSpecName "kube-api-access-z8dq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:57:42 crc kubenswrapper[4886]: I0129 16:57:42.338928 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa613edd-15e0-466f-8739-ab30f6d61801-util" (OuterVolumeSpecName: "util") pod "aa613edd-15e0-466f-8739-ab30f6d61801" (UID: "aa613edd-15e0-466f-8739-ab30f6d61801"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:57:42 crc kubenswrapper[4886]: I0129 16:57:42.429593 4886 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aa613edd-15e0-466f-8739-ab30f6d61801-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:42 crc kubenswrapper[4886]: I0129 16:57:42.429633 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8dq4\" (UniqueName: \"kubernetes.io/projected/aa613edd-15e0-466f-8739-ab30f6d61801-kube-api-access-z8dq4\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:42 crc kubenswrapper[4886]: I0129 16:57:42.429642 4886 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aa613edd-15e0-466f-8739-ab30f6d61801-util\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:42 crc kubenswrapper[4886]: I0129 16:57:42.851435 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5" event={"ID":"aa613edd-15e0-466f-8739-ab30f6d61801","Type":"ContainerDied","Data":"8be2bbeba35f6a3a828f1cf712135895f74da1f19506f9a662f89c6ac9ba1865"} Jan 29 16:57:42 crc kubenswrapper[4886]: I0129 16:57:42.851493 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8be2bbeba35f6a3a828f1cf712135895f74da1f19506f9a662f89c6ac9ba1865" Jan 29 16:57:42 crc kubenswrapper[4886]: I0129 16:57:42.851521 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.066247 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-77cfddbbb9-wbb7k"] Jan 29 16:57:53 crc kubenswrapper[4886]: E0129 16:57:53.067127 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa613edd-15e0-466f-8739-ab30f6d61801" containerName="pull" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.067141 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa613edd-15e0-466f-8739-ab30f6d61801" containerName="pull" Jan 29 16:57:53 crc kubenswrapper[4886]: E0129 16:57:53.067162 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa613edd-15e0-466f-8739-ab30f6d61801" containerName="extract" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.067168 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa613edd-15e0-466f-8739-ab30f6d61801" containerName="extract" Jan 29 16:57:53 crc kubenswrapper[4886]: E0129 16:57:53.067183 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa613edd-15e0-466f-8739-ab30f6d61801" containerName="util" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.067189 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa613edd-15e0-466f-8739-ab30f6d61801" containerName="util" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.067353 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa613edd-15e0-466f-8739-ab30f6d61801" containerName="extract" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.067899 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-77cfddbbb9-wbb7k" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.070690 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.072084 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.072173 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.072219 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-fp46d" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.073130 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.111802 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-77cfddbbb9-wbb7k"] Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.213755 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc960811-7f19-4248-8d44-e3ffcb98d650-apiservice-cert\") pod \"metallb-operator-controller-manager-77cfddbbb9-wbb7k\" (UID: \"dc960811-7f19-4248-8d44-e3ffcb98d650\") " pod="metallb-system/metallb-operator-controller-manager-77cfddbbb9-wbb7k" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.213884 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdp59\" (UniqueName: \"kubernetes.io/projected/dc960811-7f19-4248-8d44-e3ffcb98d650-kube-api-access-gdp59\") pod \"metallb-operator-controller-manager-77cfddbbb9-wbb7k\" (UID: \"dc960811-7f19-4248-8d44-e3ffcb98d650\") " pod="metallb-system/metallb-operator-controller-manager-77cfddbbb9-wbb7k" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.213950 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc960811-7f19-4248-8d44-e3ffcb98d650-webhook-cert\") pod \"metallb-operator-controller-manager-77cfddbbb9-wbb7k\" (UID: \"dc960811-7f19-4248-8d44-e3ffcb98d650\") " pod="metallb-system/metallb-operator-controller-manager-77cfddbbb9-wbb7k" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.315925 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdp59\" (UniqueName: \"kubernetes.io/projected/dc960811-7f19-4248-8d44-e3ffcb98d650-kube-api-access-gdp59\") pod \"metallb-operator-controller-manager-77cfddbbb9-wbb7k\" (UID: \"dc960811-7f19-4248-8d44-e3ffcb98d650\") " pod="metallb-system/metallb-operator-controller-manager-77cfddbbb9-wbb7k" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.316064 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc960811-7f19-4248-8d44-e3ffcb98d650-webhook-cert\") pod \"metallb-operator-controller-manager-77cfddbbb9-wbb7k\" (UID: \"dc960811-7f19-4248-8d44-e3ffcb98d650\") " pod="metallb-system/metallb-operator-controller-manager-77cfddbbb9-wbb7k" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.316128 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc960811-7f19-4248-8d44-e3ffcb98d650-apiservice-cert\") pod \"metallb-operator-controller-manager-77cfddbbb9-wbb7k\" (UID: \"dc960811-7f19-4248-8d44-e3ffcb98d650\") " pod="metallb-system/metallb-operator-controller-manager-77cfddbbb9-wbb7k" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.323928 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc960811-7f19-4248-8d44-e3ffcb98d650-webhook-cert\") pod \"metallb-operator-controller-manager-77cfddbbb9-wbb7k\" (UID: \"dc960811-7f19-4248-8d44-e3ffcb98d650\") " pod="metallb-system/metallb-operator-controller-manager-77cfddbbb9-wbb7k" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.332534 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdp59\" (UniqueName: \"kubernetes.io/projected/dc960811-7f19-4248-8d44-e3ffcb98d650-kube-api-access-gdp59\") pod \"metallb-operator-controller-manager-77cfddbbb9-wbb7k\" (UID: \"dc960811-7f19-4248-8d44-e3ffcb98d650\") " pod="metallb-system/metallb-operator-controller-manager-77cfddbbb9-wbb7k" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.332902 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc960811-7f19-4248-8d44-e3ffcb98d650-apiservice-cert\") pod \"metallb-operator-controller-manager-77cfddbbb9-wbb7k\" (UID: \"dc960811-7f19-4248-8d44-e3ffcb98d650\") " pod="metallb-system/metallb-operator-controller-manager-77cfddbbb9-wbb7k" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.401285 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-96d4668dd-sb2zt"] Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.402952 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-96d4668dd-sb2zt" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.406631 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.406865 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.407320 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-kthzn" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.409312 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-96d4668dd-sb2zt"] Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.412838 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-77cfddbbb9-wbb7k" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.519045 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a88b1900-1763-4d6c-9b3a-62598ab57eda-apiservice-cert\") pod \"metallb-operator-webhook-server-96d4668dd-sb2zt\" (UID: \"a88b1900-1763-4d6c-9b3a-62598ab57eda\") " pod="metallb-system/metallb-operator-webhook-server-96d4668dd-sb2zt" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.519132 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a88b1900-1763-4d6c-9b3a-62598ab57eda-webhook-cert\") pod \"metallb-operator-webhook-server-96d4668dd-sb2zt\" (UID: \"a88b1900-1763-4d6c-9b3a-62598ab57eda\") " pod="metallb-system/metallb-operator-webhook-server-96d4668dd-sb2zt" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.519479 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4lxq\" (UniqueName: \"kubernetes.io/projected/a88b1900-1763-4d6c-9b3a-62598ab57eda-kube-api-access-h4lxq\") pod \"metallb-operator-webhook-server-96d4668dd-sb2zt\" (UID: \"a88b1900-1763-4d6c-9b3a-62598ab57eda\") " pod="metallb-system/metallb-operator-webhook-server-96d4668dd-sb2zt" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.625318 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a88b1900-1763-4d6c-9b3a-62598ab57eda-apiservice-cert\") pod \"metallb-operator-webhook-server-96d4668dd-sb2zt\" (UID: \"a88b1900-1763-4d6c-9b3a-62598ab57eda\") " pod="metallb-system/metallb-operator-webhook-server-96d4668dd-sb2zt" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.625413 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a88b1900-1763-4d6c-9b3a-62598ab57eda-webhook-cert\") pod \"metallb-operator-webhook-server-96d4668dd-sb2zt\" (UID: \"a88b1900-1763-4d6c-9b3a-62598ab57eda\") " pod="metallb-system/metallb-operator-webhook-server-96d4668dd-sb2zt" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.625538 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4lxq\" (UniqueName: \"kubernetes.io/projected/a88b1900-1763-4d6c-9b3a-62598ab57eda-kube-api-access-h4lxq\") pod \"metallb-operator-webhook-server-96d4668dd-sb2zt\" (UID: \"a88b1900-1763-4d6c-9b3a-62598ab57eda\") " pod="metallb-system/metallb-operator-webhook-server-96d4668dd-sb2zt" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.648387 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4lxq\" (UniqueName: \"kubernetes.io/projected/a88b1900-1763-4d6c-9b3a-62598ab57eda-kube-api-access-h4lxq\") pod \"metallb-operator-webhook-server-96d4668dd-sb2zt\" (UID: \"a88b1900-1763-4d6c-9b3a-62598ab57eda\") " pod="metallb-system/metallb-operator-webhook-server-96d4668dd-sb2zt" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.653981 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a88b1900-1763-4d6c-9b3a-62598ab57eda-apiservice-cert\") pod \"metallb-operator-webhook-server-96d4668dd-sb2zt\" (UID: \"a88b1900-1763-4d6c-9b3a-62598ab57eda\") " 
pod="metallb-system/metallb-operator-webhook-server-96d4668dd-sb2zt" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.663006 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a88b1900-1763-4d6c-9b3a-62598ab57eda-webhook-cert\") pod \"metallb-operator-webhook-server-96d4668dd-sb2zt\" (UID: \"a88b1900-1763-4d6c-9b3a-62598ab57eda\") " pod="metallb-system/metallb-operator-webhook-server-96d4668dd-sb2zt" Jan 29 16:57:53 crc kubenswrapper[4886]: I0129 16:57:53.720060 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-96d4668dd-sb2zt" Jan 29 16:57:54 crc kubenswrapper[4886]: I0129 16:57:54.050823 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-96d4668dd-sb2zt"] Jan 29 16:57:54 crc kubenswrapper[4886]: W0129 16:57:54.056364 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda88b1900_1763_4d6c_9b3a_62598ab57eda.slice/crio-2b112fa27eef35a6793dab8a7c5b4bb512aac25a538c8c5bb4daa66864da7e80 WatchSource:0}: Error finding container 2b112fa27eef35a6793dab8a7c5b4bb512aac25a538c8c5bb4daa66864da7e80: Status 404 returned error can't find the container with id 2b112fa27eef35a6793dab8a7c5b4bb512aac25a538c8c5bb4daa66864da7e80 Jan 29 16:57:54 crc kubenswrapper[4886]: I0129 16:57:54.065100 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-77cfddbbb9-wbb7k"] Jan 29 16:57:54 crc kubenswrapper[4886]: W0129 16:57:54.071953 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc960811_7f19_4248_8d44_e3ffcb98d650.slice/crio-64f89935ee5dfac6c771d94b60d52920c35dad896ce51393f15d74cbbeb48d5b WatchSource:0}: Error finding container 64f89935ee5dfac6c771d94b60d52920c35dad896ce51393f15d74cbbeb48d5b: Status 404 returned error can't find the container with id 64f89935ee5dfac6c771d94b60d52920c35dad896ce51393f15d74cbbeb48d5b Jan 29 16:57:54 crc kubenswrapper[4886]: E0129 16:57:54.193782 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/openshift4/metallb-rhel9@sha256:dfdc96eec0d63a5abd9e75003d3ed847582118f9cc839ad1094baf866733699d: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/openshift4/metallb-rhel9@sha256:dfdc96eec0d63a5abd9e75003d3ed847582118f9cc839ad1094baf866733699d" Jan 29 16:57:54 crc kubenswrapper[4886]: E0129 16:57:54.193975 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:webhook-server,Image:registry.redhat.io/openshift4/metallb-rhel9@sha256:dfdc96eec0d63a5abd9e75003d3ed847582118f9cc839ad1094baf866733699d,Command:[/controller],Args:[--disable-cert-rotation=true --port=7472 --log-level=info 
--webhook-mode=onlywebhook],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:monitoring,HostPort:0,ContainerPort:7472,Protocol:TCP,HostIP:,},ContainerPort{Name:webhook-server,HostPort:0,ContainerPort:9443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:METALLB_BGP_TYPE,Value:frr,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:metallb-operator.v4.18.0-202601071645,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h4lxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/metrics,Port:{1 0 monitoring},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/metrics,Port:{1 0 monitoring},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000730000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metallb-operator-webhook-server-96d4668dd-sb2zt_metallb-system(a88b1900-1763-4d6c-9b3a-62598ab57eda): ErrImagePull: initializing source docker://registry.redhat.io/openshift4/metallb-rhel9@sha256:dfdc96eec0d63a5abd9e75003d3ed847582118f9cc839ad1094baf866733699d: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:57:54 crc kubenswrapper[4886]: E0129 16:57:54.195134 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"webhook-server\" with ErrImagePull: \"initializing source docker://registry.redhat.io/openshift4/metallb-rhel9@sha256:dfdc96eec0d63a5abd9e75003d3ed847582118f9cc839ad1094baf866733699d: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="metallb-system/metallb-operator-webhook-server-96d4668dd-sb2zt" podUID="a88b1900-1763-4d6c-9b3a-62598ab57eda" Jan 29 16:57:54 crc kubenswrapper[4886]: E0129 16:57:54.198491 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:dc12a5ec124aac3c8fa5d1a9c9e063b1854864dc58e0e3ed02e01bf8e5eaaae0: Requesting bearer token: invalid status code from registry 403 
(Forbidden)" image="registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:dc12a5ec124aac3c8fa5d1a9c9e063b1854864dc58e0e3ed02e01bf8e5eaaae0" Jan 29 16:57:54 crc kubenswrapper[4886]: E0129 16:57:54.198742 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:dc12a5ec124aac3c8fa5d1a9c9e063b1854864dc58e0e3ed02e01bf8e5eaaae0,Command:[/manager],Args:[--enable-leader-election --disable-cert-rotation=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:webhook-server,HostPort:0,ContainerPort:9443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:SPEAKER_IMAGE,Value:registry.redhat.io/openshift4/metallb-rhel9@sha256:dfdc96eec0d63a5abd9e75003d3ed847582118f9cc839ad1094baf866733699d,ValueFrom:nil,},EnvVar{Name:CONTROLLER_IMAGE,Value:registry.redhat.io/openshift4/metallb-rhel9@sha256:dfdc96eec0d63a5abd9e75003d3ed847582118f9cc839ad1094baf866733699d,ValueFrom:nil,},EnvVar{Name:FRR_IMAGE,Value:registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:86800d7a823cf444db8393dd7ffa735b2e42e9120f3f869487b0a2ed6b0db73d,ValueFrom:nil,},EnvVar{Name:DEPLOY_KUBE_RBAC_PROXIES,Value:true,ValueFrom:nil,},EnvVar{Name:FRRK8S_IMAGE,Value:registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862,ValueFrom:nil,},EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:DEPLOY_PODMONITORS,Value:false,ValueFrom:nil,},EnvVar{Name:DEPLOY_SERVICEMONITORS,Value:true,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOK,Value:true,ValueFrom:nil,},EnvVar{Name:ENABLE_OPERATOR_WEBHOOK,Value:true,ValueFrom:nil,},EnvVar{Name:METRICS_PORT,Value:29150,ValueFrom:nil,},EnvVar{Name:HTTPS_METRICS_PORT,Value:9120,ValueFrom:nil,},EnvVar{Name:FRR_METRICS_PORT,Value:29151,ValueFrom:nil,},EnvVar{Name:FRR_HTTPS_METRICS_PORT,Value:9121,ValueFrom:nil,},EnvVar{Name:MEMBER_LIST_BIND_PORT,Value:9122,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:metallb-operator.v4.18.0-202601071645,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gdp59,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8080 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000730000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metallb-operator-controller-manager-77cfddbbb9-wbb7k_metallb-system(dc960811-7f19-4248-8d44-e3ffcb98d650): ErrImagePull: initializing source docker://registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:dc12a5ec124aac3c8fa5d1a9c9e063b1854864dc58e0e3ed02e01bf8e5eaaae0: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:57:54 crc kubenswrapper[4886]: E0129 16:57:54.199978 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"initializing source docker://registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:dc12a5ec124aac3c8fa5d1a9c9e063b1854864dc58e0e3ed02e01bf8e5eaaae0: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="metallb-system/metallb-operator-controller-manager-77cfddbbb9-wbb7k" podUID="dc960811-7f19-4248-8d44-e3ffcb98d650" Jan 29 16:57:54 crc kubenswrapper[4886]: I0129 16:57:54.938306 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-77cfddbbb9-wbb7k" event={"ID":"dc960811-7f19-4248-8d44-e3ffcb98d650","Type":"ContainerStarted","Data":"64f89935ee5dfac6c771d94b60d52920c35dad896ce51393f15d74cbbeb48d5b"} Jan 29 16:57:54 crc kubenswrapper[4886]: I0129 16:57:54.939664 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-96d4668dd-sb2zt" event={"ID":"a88b1900-1763-4d6c-9b3a-62598ab57eda","Type":"ContainerStarted","Data":"2b112fa27eef35a6793dab8a7c5b4bb512aac25a538c8c5bb4daa66864da7e80"} Jan 29 16:57:54 crc kubenswrapper[4886]: E0129 16:57:54.941135 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"webhook-server\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/metallb-rhel9@sha256:dfdc96eec0d63a5abd9e75003d3ed847582118f9cc839ad1094baf866733699d\\\"\"" pod="metallb-system/metallb-operator-webhook-server-96d4668dd-sb2zt" podUID="a88b1900-1763-4d6c-9b3a-62598ab57eda" Jan 29 16:57:54 crc kubenswrapper[4886]: E0129 16:57:54.949865 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:dc12a5ec124aac3c8fa5d1a9c9e063b1854864dc58e0e3ed02e01bf8e5eaaae0\\\"\"" pod="metallb-system/metallb-operator-controller-manager-77cfddbbb9-wbb7k" podUID="dc960811-7f19-4248-8d44-e3ffcb98d650" Jan 29 16:57:55 crc kubenswrapper[4886]: E0129 16:57:55.953093 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: 
\"Back-off pulling image \\\"registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:dc12a5ec124aac3c8fa5d1a9c9e063b1854864dc58e0e3ed02e01bf8e5eaaae0\\\"\"" pod="metallb-system/metallb-operator-controller-manager-77cfddbbb9-wbb7k" podUID="dc960811-7f19-4248-8d44-e3ffcb98d650" Jan 29 16:57:55 crc kubenswrapper[4886]: E0129 16:57:55.953084 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"webhook-server\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/metallb-rhel9@sha256:dfdc96eec0d63a5abd9e75003d3ed847582118f9cc839ad1094baf866733699d\\\"\"" pod="metallb-system/metallb-operator-webhook-server-96d4668dd-sb2zt" podUID="a88b1900-1763-4d6c-9b3a-62598ab57eda" Jan 29 16:57:56 crc kubenswrapper[4886]: E0129 16:57:56.616194 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-m4fv5" podUID="3e333f39-f93b-4066-8e9f-4bd27e4d3672" Jan 29 16:58:03 crc kubenswrapper[4886]: I0129 16:58:03.399657 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2qvtg"] Jan 29 16:58:03 crc kubenswrapper[4886]: I0129 16:58:03.402103 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2qvtg" Jan 29 16:58:03 crc kubenswrapper[4886]: I0129 16:58:03.418700 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2qvtg"] Jan 29 16:58:03 crc kubenswrapper[4886]: I0129 16:58:03.591726 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkts7\" (UniqueName: \"kubernetes.io/projected/ae46bd6d-bdc4-4ba0-9005-feff36c3c16d-kube-api-access-mkts7\") pod \"certified-operators-2qvtg\" (UID: \"ae46bd6d-bdc4-4ba0-9005-feff36c3c16d\") " pod="openshift-marketplace/certified-operators-2qvtg" Jan 29 16:58:03 crc kubenswrapper[4886]: I0129 16:58:03.591847 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae46bd6d-bdc4-4ba0-9005-feff36c3c16d-catalog-content\") pod \"certified-operators-2qvtg\" (UID: \"ae46bd6d-bdc4-4ba0-9005-feff36c3c16d\") " pod="openshift-marketplace/certified-operators-2qvtg" Jan 29 16:58:03 crc kubenswrapper[4886]: I0129 16:58:03.591878 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae46bd6d-bdc4-4ba0-9005-feff36c3c16d-utilities\") pod \"certified-operators-2qvtg\" (UID: \"ae46bd6d-bdc4-4ba0-9005-feff36c3c16d\") " pod="openshift-marketplace/certified-operators-2qvtg" Jan 29 16:58:03 crc kubenswrapper[4886]: I0129 16:58:03.693482 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae46bd6d-bdc4-4ba0-9005-feff36c3c16d-catalog-content\") pod \"certified-operators-2qvtg\" (UID: \"ae46bd6d-bdc4-4ba0-9005-feff36c3c16d\") " pod="openshift-marketplace/certified-operators-2qvtg" Jan 29 16:58:03 crc kubenswrapper[4886]: I0129 16:58:03.693523 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ae46bd6d-bdc4-4ba0-9005-feff36c3c16d-utilities\") pod \"certified-operators-2qvtg\" (UID: \"ae46bd6d-bdc4-4ba0-9005-feff36c3c16d\") " pod="openshift-marketplace/certified-operators-2qvtg" Jan 29 16:58:03 crc kubenswrapper[4886]: I0129 16:58:03.693620 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkts7\" (UniqueName: \"kubernetes.io/projected/ae46bd6d-bdc4-4ba0-9005-feff36c3c16d-kube-api-access-mkts7\") pod \"certified-operators-2qvtg\" (UID: \"ae46bd6d-bdc4-4ba0-9005-feff36c3c16d\") " pod="openshift-marketplace/certified-operators-2qvtg" Jan 29 16:58:03 crc kubenswrapper[4886]: I0129 16:58:03.694061 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae46bd6d-bdc4-4ba0-9005-feff36c3c16d-utilities\") pod \"certified-operators-2qvtg\" (UID: \"ae46bd6d-bdc4-4ba0-9005-feff36c3c16d\") " pod="openshift-marketplace/certified-operators-2qvtg" Jan 29 16:58:03 crc kubenswrapper[4886]: I0129 16:58:03.694080 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae46bd6d-bdc4-4ba0-9005-feff36c3c16d-catalog-content\") pod \"certified-operators-2qvtg\" (UID: \"ae46bd6d-bdc4-4ba0-9005-feff36c3c16d\") " pod="openshift-marketplace/certified-operators-2qvtg" Jan 29 16:58:03 crc kubenswrapper[4886]: I0129 16:58:03.720717 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkts7\" (UniqueName: \"kubernetes.io/projected/ae46bd6d-bdc4-4ba0-9005-feff36c3c16d-kube-api-access-mkts7\") pod \"certified-operators-2qvtg\" (UID: \"ae46bd6d-bdc4-4ba0-9005-feff36c3c16d\") " pod="openshift-marketplace/certified-operators-2qvtg" Jan 29 16:58:03 crc kubenswrapper[4886]: I0129 16:58:03.722461 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2qvtg" Jan 29 16:58:04 crc kubenswrapper[4886]: I0129 16:58:04.225387 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2qvtg"] Jan 29 16:58:05 crc kubenswrapper[4886]: I0129 16:58:05.021057 4886 generic.go:334] "Generic (PLEG): container finished" podID="ae46bd6d-bdc4-4ba0-9005-feff36c3c16d" containerID="9f0050564609cc0eca08e69957d66f1e81d2ea75d50e8f8d88f203014bf5732a" exitCode=0 Jan 29 16:58:05 crc kubenswrapper[4886]: I0129 16:58:05.021165 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2qvtg" event={"ID":"ae46bd6d-bdc4-4ba0-9005-feff36c3c16d","Type":"ContainerDied","Data":"9f0050564609cc0eca08e69957d66f1e81d2ea75d50e8f8d88f203014bf5732a"} Jan 29 16:58:05 crc kubenswrapper[4886]: I0129 16:58:05.021467 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2qvtg" event={"ID":"ae46bd6d-bdc4-4ba0-9005-feff36c3c16d","Type":"ContainerStarted","Data":"8c993bdc28773309bb449abbfde8c5ecf5591ad63dba75097c127b6a364bd347"} Jan 29 16:58:07 crc kubenswrapper[4886]: I0129 16:58:07.037345 4886 generic.go:334] "Generic (PLEG): container finished" podID="ae46bd6d-bdc4-4ba0-9005-feff36c3c16d" containerID="d088fe64c24645b26688c050b3ff9ba12fd160541a93430d870f7b94139713f6" exitCode=0 Jan 29 16:58:07 crc kubenswrapper[4886]: I0129 16:58:07.037451 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2qvtg" event={"ID":"ae46bd6d-bdc4-4ba0-9005-feff36c3c16d","Type":"ContainerDied","Data":"d088fe64c24645b26688c050b3ff9ba12fd160541a93430d870f7b94139713f6"} Jan 29 16:58:08 crc kubenswrapper[4886]: I0129 16:58:08.047009 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2qvtg" event={"ID":"ae46bd6d-bdc4-4ba0-9005-feff36c3c16d","Type":"ContainerStarted","Data":"34bec3fb008591a00cbb8bda1d6bd98382aaf3a48fac7e2f9b7190d802e34a6c"} Jan 29 16:58:08 crc kubenswrapper[4886]: I0129 16:58:08.078886 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2qvtg" podStartSLOduration=2.674614074 podStartE2EDuration="5.078868025s" podCreationTimestamp="2026-01-29 16:58:03 +0000 UTC" firstStartedPulling="2026-01-29 16:58:05.022468011 +0000 UTC m=+2167.931187303" lastFinishedPulling="2026-01-29 16:58:07.426721982 +0000 UTC m=+2170.335441254" observedRunningTime="2026-01-29 16:58:08.077152716 +0000 UTC m=+2170.985871998" watchObservedRunningTime="2026-01-29 16:58:08.078868025 +0000 UTC m=+2170.987587297" Jan 29 16:58:11 crc kubenswrapper[4886]: E0129 16:58:11.208457 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-m4fv5" podUID="3e333f39-f93b-4066-8e9f-4bd27e4d3672" Jan 29 16:58:12 crc kubenswrapper[4886]: I0129 16:58:12.078439 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-96d4668dd-sb2zt" event={"ID":"a88b1900-1763-4d6c-9b3a-62598ab57eda","Type":"ContainerStarted","Data":"383e810f953da73560beb294b0cc4e1ff2ce27a83a172970f5d32c3574834f4b"} Jan 29 16:58:12 crc kubenswrapper[4886]: I0129 16:58:12.079155 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-96d4668dd-sb2zt" Jan 29 16:58:12 crc kubenswrapper[4886]: I0129 16:58:12.099721 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-96d4668dd-sb2zt" podStartSLOduration=1.863078996 podStartE2EDuration="19.099700809s" podCreationTimestamp="2026-01-29 16:57:53 +0000 UTC" firstStartedPulling="2026-01-29 16:57:54.059851212 +0000 UTC m=+2156.968570484" lastFinishedPulling="2026-01-29 16:58:11.296473025 +0000 UTC m=+2174.205192297" observedRunningTime="2026-01-29 16:58:12.097477797 +0000 UTC m=+2175.006197069" watchObservedRunningTime="2026-01-29 16:58:12.099700809 +0000 UTC m=+2175.008420081" Jan 29 16:58:13 crc kubenswrapper[4886]: I0129 16:58:13.723003 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2qvtg" Jan 29 16:58:13 crc kubenswrapper[4886]: I0129 16:58:13.723310 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2qvtg" Jan 29 16:58:13 crc kubenswrapper[4886]: I0129 16:58:13.779845 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2qvtg" Jan 29 16:58:14 crc kubenswrapper[4886]: I0129 16:58:14.103562 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-77cfddbbb9-wbb7k" event={"ID":"dc960811-7f19-4248-8d44-e3ffcb98d650","Type":"ContainerStarted","Data":"06b3aba9f4c6c81a562e8f7ad3f2677eb99d0965c965ef78e7a90af2bd06a456"} Jan 29 16:58:14 crc kubenswrapper[4886]: I0129 16:58:14.125051 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-77cfddbbb9-wbb7k" podStartSLOduration=1.759393225 podStartE2EDuration="21.12503673s" podCreationTimestamp="2026-01-29 16:57:53 +0000 UTC" firstStartedPulling="2026-01-29 16:57:54.07510131 +0000 UTC m=+2156.983820582" lastFinishedPulling="2026-01-29 16:58:13.440744815 +0000 UTC m=+2176.349464087" observedRunningTime="2026-01-29 16:58:14.121761038 +0000 UTC m=+2177.030480330" watchObservedRunningTime="2026-01-29 16:58:14.12503673 +0000 UTC m=+2177.033756002" Jan 29 16:58:14 crc kubenswrapper[4886]: I0129 16:58:14.155538 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2qvtg" Jan 29 16:58:16 crc kubenswrapper[4886]: I0129 16:58:16.181429 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2qvtg"] Jan 29 16:58:16 crc kubenswrapper[4886]: I0129 16:58:16.181685 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2qvtg" podUID="ae46bd6d-bdc4-4ba0-9005-feff36c3c16d" containerName="registry-server" containerID="cri-o://34bec3fb008591a00cbb8bda1d6bd98382aaf3a48fac7e2f9b7190d802e34a6c" gracePeriod=2 Jan 29 16:58:16 crc kubenswrapper[4886]: I0129 16:58:16.614097 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2qvtg" Jan 29 16:58:16 crc kubenswrapper[4886]: I0129 16:58:16.750279 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkts7\" (UniqueName: \"kubernetes.io/projected/ae46bd6d-bdc4-4ba0-9005-feff36c3c16d-kube-api-access-mkts7\") pod \"ae46bd6d-bdc4-4ba0-9005-feff36c3c16d\" (UID: \"ae46bd6d-bdc4-4ba0-9005-feff36c3c16d\") " Jan 29 16:58:16 crc kubenswrapper[4886]: I0129 16:58:16.750478 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae46bd6d-bdc4-4ba0-9005-feff36c3c16d-utilities\") pod \"ae46bd6d-bdc4-4ba0-9005-feff36c3c16d\" (UID: \"ae46bd6d-bdc4-4ba0-9005-feff36c3c16d\") " Jan 29 16:58:16 crc kubenswrapper[4886]: I0129 16:58:16.750519 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae46bd6d-bdc4-4ba0-9005-feff36c3c16d-catalog-content\") pod \"ae46bd6d-bdc4-4ba0-9005-feff36c3c16d\" (UID: \"ae46bd6d-bdc4-4ba0-9005-feff36c3c16d\") " Jan 29 16:58:16 crc kubenswrapper[4886]: I0129 16:58:16.751526 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae46bd6d-bdc4-4ba0-9005-feff36c3c16d-utilities" (OuterVolumeSpecName: "utilities") pod "ae46bd6d-bdc4-4ba0-9005-feff36c3c16d" (UID: "ae46bd6d-bdc4-4ba0-9005-feff36c3c16d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:58:16 crc kubenswrapper[4886]: I0129 16:58:16.758141 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae46bd6d-bdc4-4ba0-9005-feff36c3c16d-kube-api-access-mkts7" (OuterVolumeSpecName: "kube-api-access-mkts7") pod "ae46bd6d-bdc4-4ba0-9005-feff36c3c16d" (UID: "ae46bd6d-bdc4-4ba0-9005-feff36c3c16d"). InnerVolumeSpecName "kube-api-access-mkts7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:58:16 crc kubenswrapper[4886]: I0129 16:58:16.853185 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae46bd6d-bdc4-4ba0-9005-feff36c3c16d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:16 crc kubenswrapper[4886]: I0129 16:58:16.853235 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkts7\" (UniqueName: \"kubernetes.io/projected/ae46bd6d-bdc4-4ba0-9005-feff36c3c16d-kube-api-access-mkts7\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:17 crc kubenswrapper[4886]: I0129 16:58:17.136694 4886 generic.go:334] "Generic (PLEG): container finished" podID="ae46bd6d-bdc4-4ba0-9005-feff36c3c16d" containerID="34bec3fb008591a00cbb8bda1d6bd98382aaf3a48fac7e2f9b7190d802e34a6c" exitCode=0 Jan 29 16:58:17 crc kubenswrapper[4886]: I0129 16:58:17.136774 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2qvtg" event={"ID":"ae46bd6d-bdc4-4ba0-9005-feff36c3c16d","Type":"ContainerDied","Data":"34bec3fb008591a00cbb8bda1d6bd98382aaf3a48fac7e2f9b7190d802e34a6c"} Jan 29 16:58:17 crc kubenswrapper[4886]: I0129 16:58:17.137017 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2qvtg" event={"ID":"ae46bd6d-bdc4-4ba0-9005-feff36c3c16d","Type":"ContainerDied","Data":"8c993bdc28773309bb449abbfde8c5ecf5591ad63dba75097c127b6a364bd347"} Jan 29 16:58:17 crc kubenswrapper[4886]: I0129 16:58:17.137041 4886 scope.go:117] "RemoveContainer" containerID="34bec3fb008591a00cbb8bda1d6bd98382aaf3a48fac7e2f9b7190d802e34a6c" Jan 29 16:58:17 crc kubenswrapper[4886]: I0129 16:58:17.136796 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2qvtg" Jan 29 16:58:17 crc kubenswrapper[4886]: I0129 16:58:17.180833 4886 scope.go:117] "RemoveContainer" containerID="d088fe64c24645b26688c050b3ff9ba12fd160541a93430d870f7b94139713f6" Jan 29 16:58:17 crc kubenswrapper[4886]: I0129 16:58:17.197794 4886 scope.go:117] "RemoveContainer" containerID="9f0050564609cc0eca08e69957d66f1e81d2ea75d50e8f8d88f203014bf5732a" Jan 29 16:58:17 crc kubenswrapper[4886]: I0129 16:58:17.215760 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae46bd6d-bdc4-4ba0-9005-feff36c3c16d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae46bd6d-bdc4-4ba0-9005-feff36c3c16d" (UID: "ae46bd6d-bdc4-4ba0-9005-feff36c3c16d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:58:17 crc kubenswrapper[4886]: I0129 16:58:17.221598 4886 scope.go:117] "RemoveContainer" containerID="34bec3fb008591a00cbb8bda1d6bd98382aaf3a48fac7e2f9b7190d802e34a6c" Jan 29 16:58:17 crc kubenswrapper[4886]: E0129 16:58:17.222078 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34bec3fb008591a00cbb8bda1d6bd98382aaf3a48fac7e2f9b7190d802e34a6c\": container with ID starting with 34bec3fb008591a00cbb8bda1d6bd98382aaf3a48fac7e2f9b7190d802e34a6c not found: ID does not exist" containerID="34bec3fb008591a00cbb8bda1d6bd98382aaf3a48fac7e2f9b7190d802e34a6c" Jan 29 16:58:17 crc kubenswrapper[4886]: I0129 16:58:17.222124 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34bec3fb008591a00cbb8bda1d6bd98382aaf3a48fac7e2f9b7190d802e34a6c"} err="failed to get container status \"34bec3fb008591a00cbb8bda1d6bd98382aaf3a48fac7e2f9b7190d802e34a6c\": rpc error: code = NotFound desc = could not find container \"34bec3fb008591a00cbb8bda1d6bd98382aaf3a48fac7e2f9b7190d802e34a6c\": container with ID starting with 34bec3fb008591a00cbb8bda1d6bd98382aaf3a48fac7e2f9b7190d802e34a6c not found: ID does not exist" Jan 29 16:58:17 crc kubenswrapper[4886]: I0129 16:58:17.222158 4886 scope.go:117] "RemoveContainer" containerID="d088fe64c24645b26688c050b3ff9ba12fd160541a93430d870f7b94139713f6" Jan 29 16:58:17 crc kubenswrapper[4886]: E0129 16:58:17.222732 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d088fe64c24645b26688c050b3ff9ba12fd160541a93430d870f7b94139713f6\": container with ID starting with d088fe64c24645b26688c050b3ff9ba12fd160541a93430d870f7b94139713f6 not found: ID does not exist" containerID="d088fe64c24645b26688c050b3ff9ba12fd160541a93430d870f7b94139713f6" Jan 29 16:58:17 crc kubenswrapper[4886]: I0129 16:58:17.222766 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d088fe64c24645b26688c050b3ff9ba12fd160541a93430d870f7b94139713f6"} err="failed to get container status \"d088fe64c24645b26688c050b3ff9ba12fd160541a93430d870f7b94139713f6\": rpc error: code = NotFound desc = could not find container \"d088fe64c24645b26688c050b3ff9ba12fd160541a93430d870f7b94139713f6\": container with ID starting with d088fe64c24645b26688c050b3ff9ba12fd160541a93430d870f7b94139713f6 not found: ID does not exist" Jan 29 16:58:17 crc kubenswrapper[4886]: I0129 16:58:17.222803 4886 scope.go:117] "RemoveContainer" containerID="9f0050564609cc0eca08e69957d66f1e81d2ea75d50e8f8d88f203014bf5732a" Jan 29 16:58:17 crc kubenswrapper[4886]: E0129 16:58:17.223110 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f0050564609cc0eca08e69957d66f1e81d2ea75d50e8f8d88f203014bf5732a\": container with ID starting with 9f0050564609cc0eca08e69957d66f1e81d2ea75d50e8f8d88f203014bf5732a not found: ID does not exist" containerID="9f0050564609cc0eca08e69957d66f1e81d2ea75d50e8f8d88f203014bf5732a" Jan 29 16:58:17 crc kubenswrapper[4886]: I0129 16:58:17.223148 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f0050564609cc0eca08e69957d66f1e81d2ea75d50e8f8d88f203014bf5732a"} err="failed to get container status \"9f0050564609cc0eca08e69957d66f1e81d2ea75d50e8f8d88f203014bf5732a\": rpc error: code = NotFound desc = could not 
find container \"9f0050564609cc0eca08e69957d66f1e81d2ea75d50e8f8d88f203014bf5732a\": container with ID starting with 9f0050564609cc0eca08e69957d66f1e81d2ea75d50e8f8d88f203014bf5732a not found: ID does not exist" Jan 29 16:58:17 crc kubenswrapper[4886]: I0129 16:58:17.260084 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae46bd6d-bdc4-4ba0-9005-feff36c3c16d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:17 crc kubenswrapper[4886]: I0129 16:58:17.471975 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2qvtg"] Jan 29 16:58:17 crc kubenswrapper[4886]: I0129 16:58:17.478194 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2qvtg"] Jan 29 16:58:18 crc kubenswrapper[4886]: I0129 16:58:18.629771 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae46bd6d-bdc4-4ba0-9005-feff36c3c16d" path="/var/lib/kubelet/pods/ae46bd6d-bdc4-4ba0-9005-feff36c3c16d/volumes" Jan 29 16:58:23 crc kubenswrapper[4886]: I0129 16:58:23.413983 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-77cfddbbb9-wbb7k" Jan 29 16:58:23 crc kubenswrapper[4886]: I0129 16:58:23.725101 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-96d4668dd-sb2zt" Jan 29 16:58:24 crc kubenswrapper[4886]: E0129 16:58:24.618309 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-m4fv5" podUID="3e333f39-f93b-4066-8e9f-4bd27e4d3672" Jan 29 16:58:37 crc kubenswrapper[4886]: E0129 16:58:37.617556 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-m4fv5" podUID="3e333f39-f93b-4066-8e9f-4bd27e4d3672" Jan 29 16:58:43 crc kubenswrapper[4886]: I0129 16:58:43.417030 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-77cfddbbb9-wbb7k" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.120359 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-b4pt6"] Jan 29 16:58:44 crc kubenswrapper[4886]: E0129 16:58:44.120831 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae46bd6d-bdc4-4ba0-9005-feff36c3c16d" containerName="extract-content" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.120847 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae46bd6d-bdc4-4ba0-9005-feff36c3c16d" containerName="extract-content" Jan 29 16:58:44 crc kubenswrapper[4886]: E0129 16:58:44.120879 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae46bd6d-bdc4-4ba0-9005-feff36c3c16d" containerName="registry-server" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.120885 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae46bd6d-bdc4-4ba0-9005-feff36c3c16d" containerName="registry-server" Jan 29 16:58:44 crc kubenswrapper[4886]: E0129 16:58:44.120898 4886 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="ae46bd6d-bdc4-4ba0-9005-feff36c3c16d" containerName="extract-utilities" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.120904 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae46bd6d-bdc4-4ba0-9005-feff36c3c16d" containerName="extract-utilities" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.121038 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae46bd6d-bdc4-4ba0-9005-feff36c3c16d" containerName="registry-server" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.123458 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-b4pt6" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.126619 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.127442 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.128022 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-qtxz2" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.130078 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-x455w"] Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.131311 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-x455w" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.139111 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.143784 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-x455w"] Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.189198 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/daa4e7b8-3078-4fd1-bb04-5185fa474080-frr-conf\") pod \"frr-k8s-b4pt6\" (UID: \"daa4e7b8-3078-4fd1-bb04-5185fa474080\") " pod="metallb-system/frr-k8s-b4pt6" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.189257 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/daa4e7b8-3078-4fd1-bb04-5185fa474080-reloader\") pod \"frr-k8s-b4pt6\" (UID: \"daa4e7b8-3078-4fd1-bb04-5185fa474080\") " pod="metallb-system/frr-k8s-b4pt6" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.189276 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/daa4e7b8-3078-4fd1-bb04-5185fa474080-metrics-certs\") pod \"frr-k8s-b4pt6\" (UID: \"daa4e7b8-3078-4fd1-bb04-5185fa474080\") " pod="metallb-system/frr-k8s-b4pt6" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.189310 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg57n\" (UniqueName: \"kubernetes.io/projected/cf3feb5c-d348-4c0a-95c7-46f18db4687c-kube-api-access-hg57n\") pod \"frr-k8s-webhook-server-7df86c4f6c-x455w\" (UID: \"cf3feb5c-d348-4c0a-95c7-46f18db4687c\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-x455w" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.189401 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/daa4e7b8-3078-4fd1-bb04-5185fa474080-frr-startup\") pod \"frr-k8s-b4pt6\" (UID: \"daa4e7b8-3078-4fd1-bb04-5185fa474080\") " pod="metallb-system/frr-k8s-b4pt6" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.189416 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/daa4e7b8-3078-4fd1-bb04-5185fa474080-frr-sockets\") pod \"frr-k8s-b4pt6\" (UID: \"daa4e7b8-3078-4fd1-bb04-5185fa474080\") " pod="metallb-system/frr-k8s-b4pt6" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.189436 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npq8r\" (UniqueName: \"kubernetes.io/projected/daa4e7b8-3078-4fd1-bb04-5185fa474080-kube-api-access-npq8r\") pod \"frr-k8s-b4pt6\" (UID: \"daa4e7b8-3078-4fd1-bb04-5185fa474080\") " pod="metallb-system/frr-k8s-b4pt6" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.189450 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cf3feb5c-d348-4c0a-95c7-46f18db4687c-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-x455w\" (UID: \"cf3feb5c-d348-4c0a-95c7-46f18db4687c\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-x455w" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.189481 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/daa4e7b8-3078-4fd1-bb04-5185fa474080-metrics\") pod \"frr-k8s-b4pt6\" (UID: \"daa4e7b8-3078-4fd1-bb04-5185fa474080\") " pod="metallb-system/frr-k8s-b4pt6" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.220391 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-bmwgt"] Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.221760 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-bmwgt" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.225281 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-dx2wk" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.225641 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.225927 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.226163 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.236618 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-tlnpb"] Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.238020 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-tlnpb" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.254037 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.261590 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-tlnpb"] Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.290415 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5fe12a1b-277f-429e-a6b8-a874ec6e4918-metallb-excludel2\") pod \"speaker-bmwgt\" (UID: \"5fe12a1b-277f-429e-a6b8-a874ec6e4918\") " pod="metallb-system/speaker-bmwgt" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.290477 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/daa4e7b8-3078-4fd1-bb04-5185fa474080-frr-conf\") pod \"frr-k8s-b4pt6\" (UID: \"daa4e7b8-3078-4fd1-bb04-5185fa474080\") " pod="metallb-system/frr-k8s-b4pt6" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.290514 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5fe12a1b-277f-429e-a6b8-a874ec6e4918-memberlist\") pod \"speaker-bmwgt\" (UID: \"5fe12a1b-277f-429e-a6b8-a874ec6e4918\") " pod="metallb-system/speaker-bmwgt" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.290555 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/daa4e7b8-3078-4fd1-bb04-5185fa474080-reloader\") pod \"frr-k8s-b4pt6\" (UID: \"daa4e7b8-3078-4fd1-bb04-5185fa474080\") " pod="metallb-system/frr-k8s-b4pt6" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.290657 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5fe12a1b-277f-429e-a6b8-a874ec6e4918-metrics-certs\") pod \"speaker-bmwgt\" (UID: \"5fe12a1b-277f-429e-a6b8-a874ec6e4918\") " pod="metallb-system/speaker-bmwgt" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.290785 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/daa4e7b8-3078-4fd1-bb04-5185fa474080-metrics-certs\") pod \"frr-k8s-b4pt6\" (UID: \"daa4e7b8-3078-4fd1-bb04-5185fa474080\") " pod="metallb-system/frr-k8s-b4pt6" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.290909 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg57n\" (UniqueName: \"kubernetes.io/projected/cf3feb5c-d348-4c0a-95c7-46f18db4687c-kube-api-access-hg57n\") pod \"frr-k8s-webhook-server-7df86c4f6c-x455w\" (UID: \"cf3feb5c-d348-4c0a-95c7-46f18db4687c\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-x455w" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.290991 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/daa4e7b8-3078-4fd1-bb04-5185fa474080-frr-conf\") pod \"frr-k8s-b4pt6\" (UID: \"daa4e7b8-3078-4fd1-bb04-5185fa474080\") " pod="metallb-system/frr-k8s-b4pt6" Jan 29 16:58:44 crc kubenswrapper[4886]: E0129 16:58:44.290997 4886 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret 
"frr-k8s-certs-secret" not found Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.291048 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/daa4e7b8-3078-4fd1-bb04-5185fa474080-frr-startup\") pod \"frr-k8s-b4pt6\" (UID: \"daa4e7b8-3078-4fd1-bb04-5185fa474080\") " pod="metallb-system/frr-k8s-b4pt6" Jan 29 16:58:44 crc kubenswrapper[4886]: E0129 16:58:44.291090 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/daa4e7b8-3078-4fd1-bb04-5185fa474080-metrics-certs podName:daa4e7b8-3078-4fd1-bb04-5185fa474080 nodeName:}" failed. No retries permitted until 2026-01-29 16:58:44.791068105 +0000 UTC m=+2207.699787377 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/daa4e7b8-3078-4fd1-bb04-5185fa474080-metrics-certs") pod "frr-k8s-b4pt6" (UID: "daa4e7b8-3078-4fd1-bb04-5185fa474080") : secret "frr-k8s-certs-secret" not found Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.291110 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/daa4e7b8-3078-4fd1-bb04-5185fa474080-frr-sockets\") pod \"frr-k8s-b4pt6\" (UID: \"daa4e7b8-3078-4fd1-bb04-5185fa474080\") " pod="metallb-system/frr-k8s-b4pt6" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.291191 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cf3feb5c-d348-4c0a-95c7-46f18db4687c-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-x455w\" (UID: \"cf3feb5c-d348-4c0a-95c7-46f18db4687c\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-x455w" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.291213 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npq8r\" (UniqueName: \"kubernetes.io/projected/daa4e7b8-3078-4fd1-bb04-5185fa474080-kube-api-access-npq8r\") pod \"frr-k8s-b4pt6\" (UID: \"daa4e7b8-3078-4fd1-bb04-5185fa474080\") " pod="metallb-system/frr-k8s-b4pt6" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.291276 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/daa4e7b8-3078-4fd1-bb04-5185fa474080-metrics\") pod \"frr-k8s-b4pt6\" (UID: \"daa4e7b8-3078-4fd1-bb04-5185fa474080\") " pod="metallb-system/frr-k8s-b4pt6" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.290921 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/daa4e7b8-3078-4fd1-bb04-5185fa474080-reloader\") pod \"frr-k8s-b4pt6\" (UID: \"daa4e7b8-3078-4fd1-bb04-5185fa474080\") " pod="metallb-system/frr-k8s-b4pt6" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.291373 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qbc6\" (UniqueName: \"kubernetes.io/projected/5fe12a1b-277f-429e-a6b8-a874ec6e4918-kube-api-access-5qbc6\") pod \"speaker-bmwgt\" (UID: \"5fe12a1b-277f-429e-a6b8-a874ec6e4918\") " pod="metallb-system/speaker-bmwgt" Jan 29 16:58:44 crc kubenswrapper[4886]: E0129 16:58:44.291476 4886 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.291485 4886 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/daa4e7b8-3078-4fd1-bb04-5185fa474080-frr-sockets\") pod \"frr-k8s-b4pt6\" (UID: \"daa4e7b8-3078-4fd1-bb04-5185fa474080\") " pod="metallb-system/frr-k8s-b4pt6" Jan 29 16:58:44 crc kubenswrapper[4886]: E0129 16:58:44.291649 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf3feb5c-d348-4c0a-95c7-46f18db4687c-cert podName:cf3feb5c-d348-4c0a-95c7-46f18db4687c nodeName:}" failed. No retries permitted until 2026-01-29 16:58:44.791640751 +0000 UTC m=+2207.700360023 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cf3feb5c-d348-4c0a-95c7-46f18db4687c-cert") pod "frr-k8s-webhook-server-7df86c4f6c-x455w" (UID: "cf3feb5c-d348-4c0a-95c7-46f18db4687c") : secret "frr-k8s-webhook-server-cert" not found Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.291930 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/daa4e7b8-3078-4fd1-bb04-5185fa474080-metrics\") pod \"frr-k8s-b4pt6\" (UID: \"daa4e7b8-3078-4fd1-bb04-5185fa474080\") " pod="metallb-system/frr-k8s-b4pt6" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.292244 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/daa4e7b8-3078-4fd1-bb04-5185fa474080-frr-startup\") pod \"frr-k8s-b4pt6\" (UID: \"daa4e7b8-3078-4fd1-bb04-5185fa474080\") " pod="metallb-system/frr-k8s-b4pt6" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.315256 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg57n\" (UniqueName: \"kubernetes.io/projected/cf3feb5c-d348-4c0a-95c7-46f18db4687c-kube-api-access-hg57n\") pod \"frr-k8s-webhook-server-7df86c4f6c-x455w\" (UID: \"cf3feb5c-d348-4c0a-95c7-46f18db4687c\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-x455w" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.315987 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npq8r\" (UniqueName: \"kubernetes.io/projected/daa4e7b8-3078-4fd1-bb04-5185fa474080-kube-api-access-npq8r\") pod \"frr-k8s-b4pt6\" (UID: \"daa4e7b8-3078-4fd1-bb04-5185fa474080\") " pod="metallb-system/frr-k8s-b4pt6" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.392779 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5fe12a1b-277f-429e-a6b8-a874ec6e4918-metallb-excludel2\") pod \"speaker-bmwgt\" (UID: \"5fe12a1b-277f-429e-a6b8-a874ec6e4918\") " pod="metallb-system/speaker-bmwgt" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.392859 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5fe12a1b-277f-429e-a6b8-a874ec6e4918-memberlist\") pod \"speaker-bmwgt\" (UID: \"5fe12a1b-277f-429e-a6b8-a874ec6e4918\") " pod="metallb-system/speaker-bmwgt" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.392891 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5fe12a1b-277f-429e-a6b8-a874ec6e4918-metrics-certs\") pod \"speaker-bmwgt\" (UID: \"5fe12a1b-277f-429e-a6b8-a874ec6e4918\") " pod="metallb-system/speaker-bmwgt" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.392932 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/946b39e6-3f42-4aff-a197-f29de26c175a-metrics-certs\") pod \"controller-6968d8fdc4-tlnpb\" (UID: \"946b39e6-3f42-4aff-a197-f29de26c175a\") " pod="metallb-system/controller-6968d8fdc4-tlnpb" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.392982 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/946b39e6-3f42-4aff-a197-f29de26c175a-cert\") pod \"controller-6968d8fdc4-tlnpb\" (UID: \"946b39e6-3f42-4aff-a197-f29de26c175a\") " pod="metallb-system/controller-6968d8fdc4-tlnpb" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.393072 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26txp\" (UniqueName: \"kubernetes.io/projected/946b39e6-3f42-4aff-a197-f29de26c175a-kube-api-access-26txp\") pod \"controller-6968d8fdc4-tlnpb\" (UID: \"946b39e6-3f42-4aff-a197-f29de26c175a\") " pod="metallb-system/controller-6968d8fdc4-tlnpb" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.393110 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qbc6\" (UniqueName: \"kubernetes.io/projected/5fe12a1b-277f-429e-a6b8-a874ec6e4918-kube-api-access-5qbc6\") pod \"speaker-bmwgt\" (UID: \"5fe12a1b-277f-429e-a6b8-a874ec6e4918\") " pod="metallb-system/speaker-bmwgt" Jan 29 16:58:44 crc kubenswrapper[4886]: E0129 16:58:44.393537 4886 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 16:58:44 crc kubenswrapper[4886]: E0129 16:58:44.393588 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe12a1b-277f-429e-a6b8-a874ec6e4918-memberlist podName:5fe12a1b-277f-429e-a6b8-a874ec6e4918 nodeName:}" failed. No retries permitted until 2026-01-29 16:58:44.893573046 +0000 UTC m=+2207.802292318 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/5fe12a1b-277f-429e-a6b8-a874ec6e4918-memberlist") pod "speaker-bmwgt" (UID: "5fe12a1b-277f-429e-a6b8-a874ec6e4918") : secret "metallb-memberlist" not found Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.393689 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5fe12a1b-277f-429e-a6b8-a874ec6e4918-metallb-excludel2\") pod \"speaker-bmwgt\" (UID: \"5fe12a1b-277f-429e-a6b8-a874ec6e4918\") " pod="metallb-system/speaker-bmwgt" Jan 29 16:58:44 crc kubenswrapper[4886]: E0129 16:58:44.393750 4886 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 29 16:58:44 crc kubenswrapper[4886]: E0129 16:58:44.393779 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe12a1b-277f-429e-a6b8-a874ec6e4918-metrics-certs podName:5fe12a1b-277f-429e-a6b8-a874ec6e4918 nodeName:}" failed. No retries permitted until 2026-01-29 16:58:44.893770722 +0000 UTC m=+2207.802490114 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5fe12a1b-277f-429e-a6b8-a874ec6e4918-metrics-certs") pod "speaker-bmwgt" (UID: "5fe12a1b-277f-429e-a6b8-a874ec6e4918") : secret "speaker-certs-secret" not found Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.422177 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qbc6\" (UniqueName: \"kubernetes.io/projected/5fe12a1b-277f-429e-a6b8-a874ec6e4918-kube-api-access-5qbc6\") pod \"speaker-bmwgt\" (UID: \"5fe12a1b-277f-429e-a6b8-a874ec6e4918\") " pod="metallb-system/speaker-bmwgt" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.494610 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26txp\" (UniqueName: \"kubernetes.io/projected/946b39e6-3f42-4aff-a197-f29de26c175a-kube-api-access-26txp\") pod \"controller-6968d8fdc4-tlnpb\" (UID: \"946b39e6-3f42-4aff-a197-f29de26c175a\") " pod="metallb-system/controller-6968d8fdc4-tlnpb" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.494759 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/946b39e6-3f42-4aff-a197-f29de26c175a-metrics-certs\") pod \"controller-6968d8fdc4-tlnpb\" (UID: \"946b39e6-3f42-4aff-a197-f29de26c175a\") " pod="metallb-system/controller-6968d8fdc4-tlnpb" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.494793 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/946b39e6-3f42-4aff-a197-f29de26c175a-cert\") pod \"controller-6968d8fdc4-tlnpb\" (UID: \"946b39e6-3f42-4aff-a197-f29de26c175a\") " pod="metallb-system/controller-6968d8fdc4-tlnpb" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.496690 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.498582 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/946b39e6-3f42-4aff-a197-f29de26c175a-metrics-certs\") pod \"controller-6968d8fdc4-tlnpb\" (UID: \"946b39e6-3f42-4aff-a197-f29de26c175a\") " pod="metallb-system/controller-6968d8fdc4-tlnpb" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.508097 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/946b39e6-3f42-4aff-a197-f29de26c175a-cert\") pod \"controller-6968d8fdc4-tlnpb\" (UID: \"946b39e6-3f42-4aff-a197-f29de26c175a\") " pod="metallb-system/controller-6968d8fdc4-tlnpb" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.518505 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26txp\" (UniqueName: \"kubernetes.io/projected/946b39e6-3f42-4aff-a197-f29de26c175a-kube-api-access-26txp\") pod \"controller-6968d8fdc4-tlnpb\" (UID: \"946b39e6-3f42-4aff-a197-f29de26c175a\") " pod="metallb-system/controller-6968d8fdc4-tlnpb" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.554041 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-tlnpb" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.804319 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cf3feb5c-d348-4c0a-95c7-46f18db4687c-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-x455w\" (UID: \"cf3feb5c-d348-4c0a-95c7-46f18db4687c\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-x455w" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.804871 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/daa4e7b8-3078-4fd1-bb04-5185fa474080-metrics-certs\") pod \"frr-k8s-b4pt6\" (UID: \"daa4e7b8-3078-4fd1-bb04-5185fa474080\") " pod="metallb-system/frr-k8s-b4pt6" Jan 29 16:58:44 crc kubenswrapper[4886]: E0129 16:58:44.808450 4886 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 29 16:58:44 crc kubenswrapper[4886]: E0129 16:58:44.808525 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf3feb5c-d348-4c0a-95c7-46f18db4687c-cert podName:cf3feb5c-d348-4c0a-95c7-46f18db4687c nodeName:}" failed. No retries permitted until 2026-01-29 16:58:45.808506648 +0000 UTC m=+2208.717225920 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cf3feb5c-d348-4c0a-95c7-46f18db4687c-cert") pod "frr-k8s-webhook-server-7df86c4f6c-x455w" (UID: "cf3feb5c-d348-4c0a-95c7-46f18db4687c") : secret "frr-k8s-webhook-server-cert" not found Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.815676 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/daa4e7b8-3078-4fd1-bb04-5185fa474080-metrics-certs\") pod \"frr-k8s-b4pt6\" (UID: \"daa4e7b8-3078-4fd1-bb04-5185fa474080\") " pod="metallb-system/frr-k8s-b4pt6" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.819450 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-tlnpb"] Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.906208 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5fe12a1b-277f-429e-a6b8-a874ec6e4918-memberlist\") pod \"speaker-bmwgt\" (UID: \"5fe12a1b-277f-429e-a6b8-a874ec6e4918\") " pod="metallb-system/speaker-bmwgt" Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.906269 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5fe12a1b-277f-429e-a6b8-a874ec6e4918-metrics-certs\") pod \"speaker-bmwgt\" (UID: \"5fe12a1b-277f-429e-a6b8-a874ec6e4918\") " pod="metallb-system/speaker-bmwgt" Jan 29 16:58:44 crc kubenswrapper[4886]: E0129 16:58:44.906418 4886 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 16:58:44 crc kubenswrapper[4886]: E0129 16:58:44.906499 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe12a1b-277f-429e-a6b8-a874ec6e4918-memberlist podName:5fe12a1b-277f-429e-a6b8-a874ec6e4918 nodeName:}" failed. No retries permitted until 2026-01-29 16:58:45.906478912 +0000 UTC m=+2208.815198224 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/5fe12a1b-277f-429e-a6b8-a874ec6e4918-memberlist") pod "speaker-bmwgt" (UID: "5fe12a1b-277f-429e-a6b8-a874ec6e4918") : secret "metallb-memberlist" not found Jan 29 16:58:44 crc kubenswrapper[4886]: I0129 16:58:44.911300 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5fe12a1b-277f-429e-a6b8-a874ec6e4918-metrics-certs\") pod \"speaker-bmwgt\" (UID: \"5fe12a1b-277f-429e-a6b8-a874ec6e4918\") " pod="metallb-system/speaker-bmwgt" Jan 29 16:58:45 crc kubenswrapper[4886]: I0129 16:58:45.042856 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-b4pt6" Jan 29 16:58:45 crc kubenswrapper[4886]: I0129 16:58:45.338240 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-tlnpb" event={"ID":"946b39e6-3f42-4aff-a197-f29de26c175a","Type":"ContainerStarted","Data":"90a7c096c5a920388f8b7a677acf53ff14f8d6e4ed7b994189b0d652dd1c845a"} Jan 29 16:58:45 crc kubenswrapper[4886]: I0129 16:58:45.338274 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-tlnpb" event={"ID":"946b39e6-3f42-4aff-a197-f29de26c175a","Type":"ContainerStarted","Data":"a46d1d43cbb11249a72b7972494c82c2ee3869c566c7845a394db3f2044bf07a"} Jan 29 16:58:45 crc kubenswrapper[4886]: I0129 16:58:45.338283 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-tlnpb" event={"ID":"946b39e6-3f42-4aff-a197-f29de26c175a","Type":"ContainerStarted","Data":"14035c49466ef64a1830ce88769cabcc33c33c4d4eb6cbcf988c66dc62f5e237"} Jan 29 16:58:45 crc kubenswrapper[4886]: I0129 16:58:45.338305 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-tlnpb" Jan 29 16:58:45 crc kubenswrapper[4886]: I0129 16:58:45.339275 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b4pt6" event={"ID":"daa4e7b8-3078-4fd1-bb04-5185fa474080","Type":"ContainerStarted","Data":"e06ea4729f480338d99865a7d8bba134df9367281d355191e7b099f5804ad529"} Jan 29 16:58:45 crc kubenswrapper[4886]: I0129 16:58:45.361175 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-tlnpb" podStartSLOduration=1.361152583 podStartE2EDuration="1.361152583s" podCreationTimestamp="2026-01-29 16:58:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:58:45.355647438 +0000 UTC m=+2208.264366710" watchObservedRunningTime="2026-01-29 16:58:45.361152583 +0000 UTC m=+2208.269871855" Jan 29 16:58:45 crc kubenswrapper[4886]: I0129 16:58:45.821654 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cf3feb5c-d348-4c0a-95c7-46f18db4687c-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-x455w\" (UID: \"cf3feb5c-d348-4c0a-95c7-46f18db4687c\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-x455w" Jan 29 16:58:45 crc kubenswrapper[4886]: I0129 16:58:45.827588 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cf3feb5c-d348-4c0a-95c7-46f18db4687c-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-x455w\" (UID: \"cf3feb5c-d348-4c0a-95c7-46f18db4687c\") " 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-x455w" Jan 29 16:58:45 crc kubenswrapper[4886]: I0129 16:58:45.923792 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5fe12a1b-277f-429e-a6b8-a874ec6e4918-memberlist\") pod \"speaker-bmwgt\" (UID: \"5fe12a1b-277f-429e-a6b8-a874ec6e4918\") " pod="metallb-system/speaker-bmwgt" Jan 29 16:58:45 crc kubenswrapper[4886]: I0129 16:58:45.932704 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5fe12a1b-277f-429e-a6b8-a874ec6e4918-memberlist\") pod \"speaker-bmwgt\" (UID: \"5fe12a1b-277f-429e-a6b8-a874ec6e4918\") " pod="metallb-system/speaker-bmwgt" Jan 29 16:58:45 crc kubenswrapper[4886]: I0129 16:58:45.956612 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-x455w" Jan 29 16:58:46 crc kubenswrapper[4886]: I0129 16:58:46.041007 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-bmwgt" Jan 29 16:58:46 crc kubenswrapper[4886]: I0129 16:58:46.347766 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-bmwgt" event={"ID":"5fe12a1b-277f-429e-a6b8-a874ec6e4918","Type":"ContainerStarted","Data":"35142a1b6f288abc0ba405b57207c5e5432fb9b2dea12b9cab7fe98330e632fc"} Jan 29 16:58:46 crc kubenswrapper[4886]: I0129 16:58:46.425926 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-x455w"] Jan 29 16:58:47 crc kubenswrapper[4886]: I0129 16:58:47.356695 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-bmwgt" event={"ID":"5fe12a1b-277f-429e-a6b8-a874ec6e4918","Type":"ContainerStarted","Data":"b53067bd49090ab3d385aa12303839c8e5c71a3df115717d83b06d14d270017a"} Jan 29 16:58:47 crc kubenswrapper[4886]: I0129 16:58:47.356997 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-bmwgt" Jan 29 16:58:47 crc kubenswrapper[4886]: I0129 16:58:47.357008 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-bmwgt" event={"ID":"5fe12a1b-277f-429e-a6b8-a874ec6e4918","Type":"ContainerStarted","Data":"032bc1216d2da6c5bf637ba863a902307edea9aad036b24c6a8eaaeb30a8233a"} Jan 29 16:58:47 crc kubenswrapper[4886]: I0129 16:58:47.357909 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-x455w" event={"ID":"cf3feb5c-d348-4c0a-95c7-46f18db4687c","Type":"ContainerStarted","Data":"3c2a2e57cd9f8d3c302221dc22b2b96ce896d9e8b852e3c8adbb7972202481b5"} Jan 29 16:58:48 crc kubenswrapper[4886]: I0129 16:58:48.637105 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-bmwgt" podStartSLOduration=4.637059134 podStartE2EDuration="4.637059134s" podCreationTimestamp="2026-01-29 16:58:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:58:47.37749308 +0000 UTC m=+2210.286212352" watchObservedRunningTime="2026-01-29 16:58:48.637059134 +0000 UTC m=+2211.545778406" Jan 29 16:58:49 crc kubenswrapper[4886]: E0129 16:58:49.628567 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-m4fv5" podUID="3e333f39-f93b-4066-8e9f-4bd27e4d3672" Jan 29 16:58:54 crc kubenswrapper[4886]: I0129 16:58:54.417950 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-x455w" event={"ID":"cf3feb5c-d348-4c0a-95c7-46f18db4687c","Type":"ContainerStarted","Data":"679ea7191cf4d24a40aab69fd3b514e325f6feaeb59a810116b9ebd2cc7deaf6"} Jan 29 16:58:54 crc kubenswrapper[4886]: I0129 16:58:54.418591 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-x455w" Jan 29 16:58:54 crc kubenswrapper[4886]: I0129 16:58:54.433522 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-x455w" podStartSLOduration=2.873191484 podStartE2EDuration="10.433506844s" podCreationTimestamp="2026-01-29 16:58:44 +0000 UTC" firstStartedPulling="2026-01-29 16:58:46.44006294 +0000 UTC m=+2209.348782212" lastFinishedPulling="2026-01-29 16:58:54.00037829 +0000 UTC m=+2216.909097572" observedRunningTime="2026-01-29 16:58:54.432161387 +0000 UTC m=+2217.340880649" watchObservedRunningTime="2026-01-29 16:58:54.433506844 +0000 UTC m=+2217.342226116" Jan 29 16:58:54 crc kubenswrapper[4886]: I0129 16:58:54.557127 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-tlnpb" Jan 29 16:58:56 crc kubenswrapper[4886]: I0129 16:58:56.044932 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-bmwgt" Jan 29 16:58:57 crc kubenswrapper[4886]: I0129 16:58:57.440792 4886 generic.go:334] "Generic (PLEG): container finished" podID="daa4e7b8-3078-4fd1-bb04-5185fa474080" containerID="21a0b606d61a6f6e26359e90b9f7f02797b091203f85dbe1eddb9a5153dee23b" exitCode=0 Jan 29 16:58:57 crc kubenswrapper[4886]: I0129 16:58:57.440853 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b4pt6" event={"ID":"daa4e7b8-3078-4fd1-bb04-5185fa474080","Type":"ContainerDied","Data":"21a0b606d61a6f6e26359e90b9f7f02797b091203f85dbe1eddb9a5153dee23b"} Jan 29 16:58:58 crc kubenswrapper[4886]: I0129 16:58:58.450254 4886 generic.go:334] "Generic (PLEG): container finished" podID="daa4e7b8-3078-4fd1-bb04-5185fa474080" containerID="25827c5b02bfd7316f2f248eed60d598e7cf7efa786c464135e6dbd21e55a8a1" exitCode=0 Jan 29 16:58:58 crc kubenswrapper[4886]: I0129 16:58:58.450299 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b4pt6" event={"ID":"daa4e7b8-3078-4fd1-bb04-5185fa474080","Type":"ContainerDied","Data":"25827c5b02bfd7316f2f248eed60d598e7cf7efa786c464135e6dbd21e55a8a1"} Jan 29 16:58:58 crc kubenswrapper[4886]: I0129 16:58:58.746079 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-qnwgz"] Jan 29 16:58:58 crc kubenswrapper[4886]: I0129 16:58:58.747710 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-qnwgz" Jan 29 16:58:58 crc kubenswrapper[4886]: I0129 16:58:58.751922 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 29 16:58:58 crc kubenswrapper[4886]: I0129 16:58:58.753486 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-lr84b" Jan 29 16:58:58 crc kubenswrapper[4886]: I0129 16:58:58.755211 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 29 16:58:58 crc kubenswrapper[4886]: I0129 16:58:58.766979 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qnwgz"] Jan 29 16:58:58 crc kubenswrapper[4886]: I0129 16:58:58.834944 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrd5z\" (UniqueName: \"kubernetes.io/projected/ddf6312e-f5f4-4cdf-89f6-eca0052b4ce2-kube-api-access-lrd5z\") pod \"openstack-operator-index-qnwgz\" (UID: \"ddf6312e-f5f4-4cdf-89f6-eca0052b4ce2\") " pod="openstack-operators/openstack-operator-index-qnwgz" Jan 29 16:58:58 crc kubenswrapper[4886]: I0129 16:58:58.936034 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrd5z\" (UniqueName: \"kubernetes.io/projected/ddf6312e-f5f4-4cdf-89f6-eca0052b4ce2-kube-api-access-lrd5z\") pod \"openstack-operator-index-qnwgz\" (UID: \"ddf6312e-f5f4-4cdf-89f6-eca0052b4ce2\") " pod="openstack-operators/openstack-operator-index-qnwgz" Jan 29 16:58:58 crc kubenswrapper[4886]: I0129 16:58:58.956174 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrd5z\" (UniqueName: \"kubernetes.io/projected/ddf6312e-f5f4-4cdf-89f6-eca0052b4ce2-kube-api-access-lrd5z\") pod \"openstack-operator-index-qnwgz\" (UID: \"ddf6312e-f5f4-4cdf-89f6-eca0052b4ce2\") " pod="openstack-operators/openstack-operator-index-qnwgz" Jan 29 16:58:59 crc kubenswrapper[4886]: I0129 16:58:59.076909 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-qnwgz" Jan 29 16:58:59 crc kubenswrapper[4886]: I0129 16:58:59.466900 4886 generic.go:334] "Generic (PLEG): container finished" podID="daa4e7b8-3078-4fd1-bb04-5185fa474080" containerID="acb129ab0206aca82377f91455ef6b325a4f1c3434d95c34a20c88225efd4c3d" exitCode=0 Jan 29 16:58:59 crc kubenswrapper[4886]: I0129 16:58:59.466996 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b4pt6" event={"ID":"daa4e7b8-3078-4fd1-bb04-5185fa474080","Type":"ContainerDied","Data":"acb129ab0206aca82377f91455ef6b325a4f1c3434d95c34a20c88225efd4c3d"} Jan 29 16:58:59 crc kubenswrapper[4886]: I0129 16:58:59.483174 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qnwgz"] Jan 29 16:59:00 crc kubenswrapper[4886]: I0129 16:59:00.476131 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qnwgz" event={"ID":"ddf6312e-f5f4-4cdf-89f6-eca0052b4ce2","Type":"ContainerStarted","Data":"ee5a97170b8f7a7e021d72b474ef4b841031a4d1f2600ead3a0c2d42211558e6"} Jan 29 16:59:00 crc kubenswrapper[4886]: I0129 16:59:00.479587 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b4pt6" event={"ID":"daa4e7b8-3078-4fd1-bb04-5185fa474080","Type":"ContainerStarted","Data":"4203e14a2ac44c65e3cc097c8472d981365bf56aa09b734316f98f7b8be42d92"} Jan 29 16:59:00 crc kubenswrapper[4886]: I0129 16:59:00.479640 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b4pt6" event={"ID":"daa4e7b8-3078-4fd1-bb04-5185fa474080","Type":"ContainerStarted","Data":"75f3a507fe6d1f628a92fc5718710f0607717ba847441497f169ee297b0a6694"} Jan 29 16:59:00 crc kubenswrapper[4886]: I0129 16:59:00.479658 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b4pt6" event={"ID":"daa4e7b8-3078-4fd1-bb04-5185fa474080","Type":"ContainerStarted","Data":"63cc1e42952fa8b15a3d002ad4ffa98bda98d1621b5eefcee2604097c29d2b66"} Jan 29 16:59:01 crc kubenswrapper[4886]: I0129 16:59:01.493818 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b4pt6" event={"ID":"daa4e7b8-3078-4fd1-bb04-5185fa474080","Type":"ContainerStarted","Data":"b1a7f010389fbb8f26d68585c70e110e3a0ae726f93f2cc2ac75c9567a80bb2f"} Jan 29 16:59:01 crc kubenswrapper[4886]: I0129 16:59:01.739007 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-qnwgz"] Jan 29 16:59:02 crc kubenswrapper[4886]: I0129 16:59:02.332792 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-ddcl7"] Jan 29 16:59:02 crc kubenswrapper[4886]: I0129 16:59:02.335018 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-ddcl7" Jan 29 16:59:02 crc kubenswrapper[4886]: I0129 16:59:02.354339 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ddcl7"] Jan 29 16:59:02 crc kubenswrapper[4886]: I0129 16:59:02.392510 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjddc\" (UniqueName: \"kubernetes.io/projected/9b2b35ba-9f49-4dd6-816d-6acc4e54e514-kube-api-access-mjddc\") pod \"openstack-operator-index-ddcl7\" (UID: \"9b2b35ba-9f49-4dd6-816d-6acc4e54e514\") " pod="openstack-operators/openstack-operator-index-ddcl7" Jan 29 16:59:02 crc kubenswrapper[4886]: I0129 16:59:02.493647 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjddc\" (UniqueName: \"kubernetes.io/projected/9b2b35ba-9f49-4dd6-816d-6acc4e54e514-kube-api-access-mjddc\") pod \"openstack-operator-index-ddcl7\" (UID: \"9b2b35ba-9f49-4dd6-816d-6acc4e54e514\") " pod="openstack-operators/openstack-operator-index-ddcl7" Jan 29 16:59:02 crc kubenswrapper[4886]: I0129 16:59:02.535278 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjddc\" (UniqueName: \"kubernetes.io/projected/9b2b35ba-9f49-4dd6-816d-6acc4e54e514-kube-api-access-mjddc\") pod \"openstack-operator-index-ddcl7\" (UID: \"9b2b35ba-9f49-4dd6-816d-6acc4e54e514\") " pod="openstack-operators/openstack-operator-index-ddcl7" Jan 29 16:59:02 crc kubenswrapper[4886]: I0129 16:59:02.654938 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ddcl7" Jan 29 16:59:03 crc kubenswrapper[4886]: I0129 16:59:03.556991 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b4pt6" event={"ID":"daa4e7b8-3078-4fd1-bb04-5185fa474080","Type":"ContainerStarted","Data":"e38c6d3208f1788c5bfe3f357bef3c7c2a8bead2f458bdb21881d45d2fbb1f99"} Jan 29 16:59:03 crc kubenswrapper[4886]: I0129 16:59:03.654579 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ddcl7"] Jan 29 16:59:04 crc kubenswrapper[4886]: I0129 16:59:04.573191 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b4pt6" event={"ID":"daa4e7b8-3078-4fd1-bb04-5185fa474080","Type":"ContainerStarted","Data":"4a04d113b40bcf9ea2910cae42f1486fc968739032f04351da7b23a47184f7d1"} Jan 29 16:59:04 crc kubenswrapper[4886]: I0129 16:59:04.573585 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-b4pt6" Jan 29 16:59:04 crc kubenswrapper[4886]: I0129 16:59:04.622300 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-b4pt6" podStartSLOduration=9.140252572 podStartE2EDuration="20.622271226s" podCreationTimestamp="2026-01-29 16:58:44 +0000 UTC" firstStartedPulling="2026-01-29 16:58:45.195024463 +0000 UTC m=+2208.103743735" lastFinishedPulling="2026-01-29 16:58:56.677043117 +0000 UTC m=+2219.585762389" observedRunningTime="2026-01-29 16:59:04.616416012 +0000 UTC m=+2227.525135324" watchObservedRunningTime="2026-01-29 16:59:04.622271226 +0000 UTC m=+2227.530990518" Jan 29 16:59:05 crc kubenswrapper[4886]: I0129 16:59:05.043845 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-b4pt6" Jan 29 16:59:05 crc kubenswrapper[4886]: I0129 16:59:05.098515 4886 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-b4pt6" Jan 29 16:59:05 crc kubenswrapper[4886]: I0129 16:59:05.583492 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ddcl7" event={"ID":"9b2b35ba-9f49-4dd6-816d-6acc4e54e514","Type":"ContainerStarted","Data":"ee24296722ba86dce412919d1af258d029c8c32cfa4628ead2f77068d6c1ed4f"} Jan 29 16:59:05 crc kubenswrapper[4886]: I0129 16:59:05.583828 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ddcl7" event={"ID":"9b2b35ba-9f49-4dd6-816d-6acc4e54e514","Type":"ContainerStarted","Data":"30f7e589be86b09188a992f305fc1177bcd24a4fc997eea3ff9f03b9b9cb6b77"} Jan 29 16:59:05 crc kubenswrapper[4886]: I0129 16:59:05.585482 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-qnwgz" podUID="ddf6312e-f5f4-4cdf-89f6-eca0052b4ce2" containerName="registry-server" containerID="cri-o://a0b214de91e150b6b957d8f5429ccb90584e319fc888745b1be949d8551e92d4" gracePeriod=2 Jan 29 16:59:05 crc kubenswrapper[4886]: I0129 16:59:05.585893 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qnwgz" event={"ID":"ddf6312e-f5f4-4cdf-89f6-eca0052b4ce2","Type":"ContainerStarted","Data":"a0b214de91e150b6b957d8f5429ccb90584e319fc888745b1be949d8551e92d4"} Jan 29 16:59:05 crc kubenswrapper[4886]: I0129 16:59:05.604053 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-ddcl7" podStartSLOduration=3.193870964 podStartE2EDuration="3.604033773s" podCreationTimestamp="2026-01-29 16:59:02 +0000 UTC" firstStartedPulling="2026-01-29 16:59:04.762775246 +0000 UTC m=+2227.671494518" lastFinishedPulling="2026-01-29 16:59:05.172938055 +0000 UTC m=+2228.081657327" observedRunningTime="2026-01-29 16:59:05.602707335 +0000 UTC m=+2228.511426627" watchObservedRunningTime="2026-01-29 16:59:05.604033773 +0000 UTC m=+2228.512753045" Jan 29 16:59:05 crc kubenswrapper[4886]: I0129 16:59:05.624282 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-qnwgz" podStartSLOduration=2.349477144 podStartE2EDuration="7.624263111s" podCreationTimestamp="2026-01-29 16:58:58 +0000 UTC" firstStartedPulling="2026-01-29 16:58:59.499215124 +0000 UTC m=+2222.407934396" lastFinishedPulling="2026-01-29 16:59:04.774001091 +0000 UTC m=+2227.682720363" observedRunningTime="2026-01-29 16:59:05.619847367 +0000 UTC m=+2228.528566649" watchObservedRunningTime="2026-01-29 16:59:05.624263111 +0000 UTC m=+2228.532982383" Jan 29 16:59:05 crc kubenswrapper[4886]: I0129 16:59:05.962556 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-x455w" Jan 29 16:59:09 crc kubenswrapper[4886]: I0129 16:59:09.077215 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-qnwgz" Jan 29 16:59:09 crc kubenswrapper[4886]: I0129 16:59:09.628134 4886 generic.go:334] "Generic (PLEG): container finished" podID="ddf6312e-f5f4-4cdf-89f6-eca0052b4ce2" containerID="a0b214de91e150b6b957d8f5429ccb90584e319fc888745b1be949d8551e92d4" exitCode=0 Jan 29 16:59:09 crc kubenswrapper[4886]: I0129 16:59:09.628228 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qnwgz" 
event={"ID":"ddf6312e-f5f4-4cdf-89f6-eca0052b4ce2","Type":"ContainerDied","Data":"a0b214de91e150b6b957d8f5429ccb90584e319fc888745b1be949d8551e92d4"} Jan 29 16:59:09 crc kubenswrapper[4886]: I0129 16:59:09.811153 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qnwgz" Jan 29 16:59:09 crc kubenswrapper[4886]: I0129 16:59:09.920122 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrd5z\" (UniqueName: \"kubernetes.io/projected/ddf6312e-f5f4-4cdf-89f6-eca0052b4ce2-kube-api-access-lrd5z\") pod \"ddf6312e-f5f4-4cdf-89f6-eca0052b4ce2\" (UID: \"ddf6312e-f5f4-4cdf-89f6-eca0052b4ce2\") " Jan 29 16:59:09 crc kubenswrapper[4886]: I0129 16:59:09.925676 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddf6312e-f5f4-4cdf-89f6-eca0052b4ce2-kube-api-access-lrd5z" (OuterVolumeSpecName: "kube-api-access-lrd5z") pod "ddf6312e-f5f4-4cdf-89f6-eca0052b4ce2" (UID: "ddf6312e-f5f4-4cdf-89f6-eca0052b4ce2"). InnerVolumeSpecName "kube-api-access-lrd5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:59:10 crc kubenswrapper[4886]: I0129 16:59:10.021969 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrd5z\" (UniqueName: \"kubernetes.io/projected/ddf6312e-f5f4-4cdf-89f6-eca0052b4ce2-kube-api-access-lrd5z\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:10 crc kubenswrapper[4886]: I0129 16:59:10.643492 4886 generic.go:334] "Generic (PLEG): container finished" podID="3e333f39-f93b-4066-8e9f-4bd27e4d3672" containerID="ce5c58afc7739fb2f46c85959cde9363860ba02d57d66af2be89058b5434f657" exitCode=0 Jan 29 16:59:10 crc kubenswrapper[4886]: I0129 16:59:10.643558 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4fv5" event={"ID":"3e333f39-f93b-4066-8e9f-4bd27e4d3672","Type":"ContainerDied","Data":"ce5c58afc7739fb2f46c85959cde9363860ba02d57d66af2be89058b5434f657"} Jan 29 16:59:10 crc kubenswrapper[4886]: I0129 16:59:10.647830 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qnwgz" event={"ID":"ddf6312e-f5f4-4cdf-89f6-eca0052b4ce2","Type":"ContainerDied","Data":"ee5a97170b8f7a7e021d72b474ef4b841031a4d1f2600ead3a0c2d42211558e6"} Jan 29 16:59:10 crc kubenswrapper[4886]: I0129 16:59:10.647908 4886 scope.go:117] "RemoveContainer" containerID="a0b214de91e150b6b957d8f5429ccb90584e319fc888745b1be949d8551e92d4" Jan 29 16:59:10 crc kubenswrapper[4886]: I0129 16:59:10.647970 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-qnwgz" Jan 29 16:59:10 crc kubenswrapper[4886]: I0129 16:59:10.721054 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-qnwgz"] Jan 29 16:59:10 crc kubenswrapper[4886]: I0129 16:59:10.732481 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-qnwgz"] Jan 29 16:59:12 crc kubenswrapper[4886]: I0129 16:59:12.625199 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddf6312e-f5f4-4cdf-89f6-eca0052b4ce2" path="/var/lib/kubelet/pods/ddf6312e-f5f4-4cdf-89f6-eca0052b4ce2/volumes" Jan 29 16:59:12 crc kubenswrapper[4886]: I0129 16:59:12.656231 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-ddcl7" Jan 29 16:59:12 crc kubenswrapper[4886]: I0129 16:59:12.656275 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-ddcl7" Jan 29 16:59:12 crc kubenswrapper[4886]: I0129 16:59:12.684872 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-ddcl7" Jan 29 16:59:12 crc kubenswrapper[4886]: I0129 16:59:12.712027 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-ddcl7" Jan 29 16:59:14 crc kubenswrapper[4886]: I0129 16:59:14.682853 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4fv5" event={"ID":"3e333f39-f93b-4066-8e9f-4bd27e4d3672","Type":"ContainerStarted","Data":"9db5ae6315c700c7878b8b6ad7193c1666b3b2cc58fcace9fc8e327a5fb5a0e1"} Jan 29 16:59:14 crc kubenswrapper[4886]: I0129 16:59:14.714748 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m4fv5" podStartSLOduration=2.964061207 podStartE2EDuration="2m53.714733031s" podCreationTimestamp="2026-01-29 16:56:21 +0000 UTC" firstStartedPulling="2026-01-29 16:56:23.12173226 +0000 UTC m=+2066.030451532" lastFinishedPulling="2026-01-29 16:59:13.872404084 +0000 UTC m=+2236.781123356" observedRunningTime="2026-01-29 16:59:14.713858356 +0000 UTC m=+2237.622577618" watchObservedRunningTime="2026-01-29 16:59:14.714733031 +0000 UTC m=+2237.623452303" Jan 29 16:59:15 crc kubenswrapper[4886]: I0129 16:59:15.068720 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-b4pt6" Jan 29 16:59:18 crc kubenswrapper[4886]: I0129 16:59:18.183179 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp"] Jan 29 16:59:18 crc kubenswrapper[4886]: E0129 16:59:18.183906 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddf6312e-f5f4-4cdf-89f6-eca0052b4ce2" containerName="registry-server" Jan 29 16:59:18 crc kubenswrapper[4886]: I0129 16:59:18.183921 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddf6312e-f5f4-4cdf-89f6-eca0052b4ce2" containerName="registry-server" Jan 29 16:59:18 crc kubenswrapper[4886]: I0129 16:59:18.184128 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddf6312e-f5f4-4cdf-89f6-eca0052b4ce2" containerName="registry-server" Jan 29 16:59:18 crc kubenswrapper[4886]: I0129 16:59:18.185439 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp" Jan 29 16:59:18 crc kubenswrapper[4886]: I0129 16:59:18.188397 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-m8266" Jan 29 16:59:18 crc kubenswrapper[4886]: I0129 16:59:18.203400 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp"] Jan 29 16:59:18 crc kubenswrapper[4886]: I0129 16:59:18.386790 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdxz9\" (UniqueName: \"kubernetes.io/projected/c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e-kube-api-access-qdxz9\") pod \"39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp\" (UID: \"c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e\") " pod="openstack-operators/39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp" Jan 29 16:59:18 crc kubenswrapper[4886]: I0129 16:59:18.387008 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e-util\") pod \"39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp\" (UID: \"c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e\") " pod="openstack-operators/39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp" Jan 29 16:59:18 crc kubenswrapper[4886]: I0129 16:59:18.387080 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e-bundle\") pod \"39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp\" (UID: \"c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e\") " pod="openstack-operators/39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp" Jan 29 16:59:18 crc kubenswrapper[4886]: I0129 16:59:18.488455 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e-bundle\") pod \"39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp\" (UID: \"c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e\") " pod="openstack-operators/39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp" Jan 29 16:59:18 crc kubenswrapper[4886]: I0129 16:59:18.488585 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdxz9\" (UniqueName: \"kubernetes.io/projected/c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e-kube-api-access-qdxz9\") pod \"39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp\" (UID: \"c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e\") " pod="openstack-operators/39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp" Jan 29 16:59:18 crc kubenswrapper[4886]: I0129 16:59:18.488772 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e-util\") pod \"39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp\" (UID: \"c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e\") " pod="openstack-operators/39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp" Jan 29 16:59:18 crc kubenswrapper[4886]: I0129 16:59:18.488967 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e-bundle\") pod \"39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp\" (UID: \"c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e\") " pod="openstack-operators/39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp" Jan 29 16:59:18 crc kubenswrapper[4886]: I0129 16:59:18.489247 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e-util\") pod \"39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp\" (UID: \"c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e\") " pod="openstack-operators/39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp" Jan 29 16:59:18 crc kubenswrapper[4886]: I0129 16:59:18.533728 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdxz9\" (UniqueName: \"kubernetes.io/projected/c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e-kube-api-access-qdxz9\") pod \"39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp\" (UID: \"c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e\") " pod="openstack-operators/39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp" Jan 29 16:59:18 crc kubenswrapper[4886]: I0129 16:59:18.810736 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp" Jan 29 16:59:19 crc kubenswrapper[4886]: W0129 16:59:19.277929 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5eb87e5_9a66_4bf3_8348_1dc03c7e0e8e.slice/crio-fb2914aa8dc2c93108ec5ed30e7b3b77724878da0cb11ac6bf0a6c92f19837f6 WatchSource:0}: Error finding container fb2914aa8dc2c93108ec5ed30e7b3b77724878da0cb11ac6bf0a6c92f19837f6: Status 404 returned error can't find the container with id fb2914aa8dc2c93108ec5ed30e7b3b77724878da0cb11ac6bf0a6c92f19837f6 Jan 29 16:59:19 crc kubenswrapper[4886]: I0129 16:59:19.282416 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp"] Jan 29 16:59:19 crc kubenswrapper[4886]: I0129 16:59:19.724920 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp" event={"ID":"c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e","Type":"ContainerStarted","Data":"fb2914aa8dc2c93108ec5ed30e7b3b77724878da0cb11ac6bf0a6c92f19837f6"} Jan 29 16:59:20 crc kubenswrapper[4886]: I0129 16:59:20.733637 4886 generic.go:334] "Generic (PLEG): container finished" podID="c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e" containerID="f99f0442ec3925d7cfe1e552bc529dd0f7264a1bd5daec05d7d50b14d01e3241" exitCode=0 Jan 29 16:59:20 crc kubenswrapper[4886]: I0129 16:59:20.733675 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp" event={"ID":"c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e","Type":"ContainerDied","Data":"f99f0442ec3925d7cfe1e552bc529dd0f7264a1bd5daec05d7d50b14d01e3241"} Jan 29 16:59:22 crc kubenswrapper[4886]: I0129 16:59:22.129724 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m4fv5" Jan 29 16:59:22 crc kubenswrapper[4886]: I0129 16:59:22.129799 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m4fv5" Jan 29 16:59:22 crc 
kubenswrapper[4886]: I0129 16:59:22.190443 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m4fv5" Jan 29 16:59:22 crc kubenswrapper[4886]: I0129 16:59:22.839743 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m4fv5" Jan 29 16:59:23 crc kubenswrapper[4886]: I0129 16:59:23.530003 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4fv5"] Jan 29 16:59:24 crc kubenswrapper[4886]: I0129 16:59:24.776371 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m4fv5" podUID="3e333f39-f93b-4066-8e9f-4bd27e4d3672" containerName="registry-server" containerID="cri-o://9db5ae6315c700c7878b8b6ad7193c1666b3b2cc58fcace9fc8e327a5fb5a0e1" gracePeriod=2 Jan 29 16:59:25 crc kubenswrapper[4886]: I0129 16:59:25.513705 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4fv5" Jan 29 16:59:25 crc kubenswrapper[4886]: I0129 16:59:25.608209 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kfqs\" (UniqueName: \"kubernetes.io/projected/3e333f39-f93b-4066-8e9f-4bd27e4d3672-kube-api-access-6kfqs\") pod \"3e333f39-f93b-4066-8e9f-4bd27e4d3672\" (UID: \"3e333f39-f93b-4066-8e9f-4bd27e4d3672\") " Jan 29 16:59:25 crc kubenswrapper[4886]: I0129 16:59:25.608500 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e333f39-f93b-4066-8e9f-4bd27e4d3672-catalog-content\") pod \"3e333f39-f93b-4066-8e9f-4bd27e4d3672\" (UID: \"3e333f39-f93b-4066-8e9f-4bd27e4d3672\") " Jan 29 16:59:25 crc kubenswrapper[4886]: I0129 16:59:25.608584 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e333f39-f93b-4066-8e9f-4bd27e4d3672-utilities\") pod \"3e333f39-f93b-4066-8e9f-4bd27e4d3672\" (UID: \"3e333f39-f93b-4066-8e9f-4bd27e4d3672\") " Jan 29 16:59:25 crc kubenswrapper[4886]: I0129 16:59:25.609265 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e333f39-f93b-4066-8e9f-4bd27e4d3672-utilities" (OuterVolumeSpecName: "utilities") pod "3e333f39-f93b-4066-8e9f-4bd27e4d3672" (UID: "3e333f39-f93b-4066-8e9f-4bd27e4d3672"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:59:25 crc kubenswrapper[4886]: I0129 16:59:25.614672 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e333f39-f93b-4066-8e9f-4bd27e4d3672-kube-api-access-6kfqs" (OuterVolumeSpecName: "kube-api-access-6kfqs") pod "3e333f39-f93b-4066-8e9f-4bd27e4d3672" (UID: "3e333f39-f93b-4066-8e9f-4bd27e4d3672"). InnerVolumeSpecName "kube-api-access-6kfqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:59:25 crc kubenswrapper[4886]: I0129 16:59:25.642116 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e333f39-f93b-4066-8e9f-4bd27e4d3672-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e333f39-f93b-4066-8e9f-4bd27e4d3672" (UID: "3e333f39-f93b-4066-8e9f-4bd27e4d3672"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:59:25 crc kubenswrapper[4886]: I0129 16:59:25.711383 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kfqs\" (UniqueName: \"kubernetes.io/projected/3e333f39-f93b-4066-8e9f-4bd27e4d3672-kube-api-access-6kfqs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:25 crc kubenswrapper[4886]: I0129 16:59:25.711412 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e333f39-f93b-4066-8e9f-4bd27e4d3672-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:25 crc kubenswrapper[4886]: I0129 16:59:25.711427 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e333f39-f93b-4066-8e9f-4bd27e4d3672-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:25 crc kubenswrapper[4886]: I0129 16:59:25.788454 4886 generic.go:334] "Generic (PLEG): container finished" podID="3e333f39-f93b-4066-8e9f-4bd27e4d3672" containerID="9db5ae6315c700c7878b8b6ad7193c1666b3b2cc58fcace9fc8e327a5fb5a0e1" exitCode=0 Jan 29 16:59:25 crc kubenswrapper[4886]: I0129 16:59:25.788528 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4fv5" event={"ID":"3e333f39-f93b-4066-8e9f-4bd27e4d3672","Type":"ContainerDied","Data":"9db5ae6315c700c7878b8b6ad7193c1666b3b2cc58fcace9fc8e327a5fb5a0e1"} Jan 29 16:59:25 crc kubenswrapper[4886]: I0129 16:59:25.788559 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4fv5" event={"ID":"3e333f39-f93b-4066-8e9f-4bd27e4d3672","Type":"ContainerDied","Data":"721f687c812954ac213bf098f41dc7b5630da2bcf0b09ba3c2bdd27881939e63"} Jan 29 16:59:25 crc kubenswrapper[4886]: I0129 16:59:25.788575 4886 scope.go:117] "RemoveContainer" containerID="9db5ae6315c700c7878b8b6ad7193c1666b3b2cc58fcace9fc8e327a5fb5a0e1" Jan 29 16:59:25 crc kubenswrapper[4886]: I0129 16:59:25.788581 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4fv5" Jan 29 16:59:25 crc kubenswrapper[4886]: I0129 16:59:25.792319 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp" event={"ID":"c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e","Type":"ContainerStarted","Data":"ec42cdfc44ca840cbb4fad62e8838ff084ffff3f56e86e59dc4375f0d43ac3af"} Jan 29 16:59:25 crc kubenswrapper[4886]: I0129 16:59:25.809021 4886 scope.go:117] "RemoveContainer" containerID="ce5c58afc7739fb2f46c85959cde9363860ba02d57d66af2be89058b5434f657" Jan 29 16:59:25 crc kubenswrapper[4886]: I0129 16:59:25.836350 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4fv5"] Jan 29 16:59:25 crc kubenswrapper[4886]: I0129 16:59:25.837731 4886 scope.go:117] "RemoveContainer" containerID="54c413f049295c75ea245b7bf5b81932f10621e4a5575c34da54c41a85be6026" Jan 29 16:59:25 crc kubenswrapper[4886]: I0129 16:59:25.848091 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4fv5"] Jan 29 16:59:25 crc kubenswrapper[4886]: I0129 16:59:25.857651 4886 scope.go:117] "RemoveContainer" containerID="9db5ae6315c700c7878b8b6ad7193c1666b3b2cc58fcace9fc8e327a5fb5a0e1" Jan 29 16:59:25 crc kubenswrapper[4886]: E0129 16:59:25.858073 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9db5ae6315c700c7878b8b6ad7193c1666b3b2cc58fcace9fc8e327a5fb5a0e1\": container with ID starting with 9db5ae6315c700c7878b8b6ad7193c1666b3b2cc58fcace9fc8e327a5fb5a0e1 not found: ID does not exist" containerID="9db5ae6315c700c7878b8b6ad7193c1666b3b2cc58fcace9fc8e327a5fb5a0e1" Jan 29 16:59:25 crc kubenswrapper[4886]: I0129 16:59:25.858118 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9db5ae6315c700c7878b8b6ad7193c1666b3b2cc58fcace9fc8e327a5fb5a0e1"} err="failed to get container status \"9db5ae6315c700c7878b8b6ad7193c1666b3b2cc58fcace9fc8e327a5fb5a0e1\": rpc error: code = NotFound desc = could not find container \"9db5ae6315c700c7878b8b6ad7193c1666b3b2cc58fcace9fc8e327a5fb5a0e1\": container with ID starting with 9db5ae6315c700c7878b8b6ad7193c1666b3b2cc58fcace9fc8e327a5fb5a0e1 not found: ID does not exist" Jan 29 16:59:25 crc kubenswrapper[4886]: I0129 16:59:25.858148 4886 scope.go:117] "RemoveContainer" containerID="ce5c58afc7739fb2f46c85959cde9363860ba02d57d66af2be89058b5434f657" Jan 29 16:59:25 crc kubenswrapper[4886]: E0129 16:59:25.858725 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce5c58afc7739fb2f46c85959cde9363860ba02d57d66af2be89058b5434f657\": container with ID starting with ce5c58afc7739fb2f46c85959cde9363860ba02d57d66af2be89058b5434f657 not found: ID does not exist" containerID="ce5c58afc7739fb2f46c85959cde9363860ba02d57d66af2be89058b5434f657" Jan 29 16:59:25 crc kubenswrapper[4886]: I0129 16:59:25.858747 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce5c58afc7739fb2f46c85959cde9363860ba02d57d66af2be89058b5434f657"} err="failed to get container status \"ce5c58afc7739fb2f46c85959cde9363860ba02d57d66af2be89058b5434f657\": rpc error: code = NotFound desc = could not find container \"ce5c58afc7739fb2f46c85959cde9363860ba02d57d66af2be89058b5434f657\": container with ID starting with 
ce5c58afc7739fb2f46c85959cde9363860ba02d57d66af2be89058b5434f657 not found: ID does not exist" Jan 29 16:59:25 crc kubenswrapper[4886]: I0129 16:59:25.858763 4886 scope.go:117] "RemoveContainer" containerID="54c413f049295c75ea245b7bf5b81932f10621e4a5575c34da54c41a85be6026" Jan 29 16:59:25 crc kubenswrapper[4886]: E0129 16:59:25.858982 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54c413f049295c75ea245b7bf5b81932f10621e4a5575c34da54c41a85be6026\": container with ID starting with 54c413f049295c75ea245b7bf5b81932f10621e4a5575c34da54c41a85be6026 not found: ID does not exist" containerID="54c413f049295c75ea245b7bf5b81932f10621e4a5575c34da54c41a85be6026" Jan 29 16:59:25 crc kubenswrapper[4886]: I0129 16:59:25.859001 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54c413f049295c75ea245b7bf5b81932f10621e4a5575c34da54c41a85be6026"} err="failed to get container status \"54c413f049295c75ea245b7bf5b81932f10621e4a5575c34da54c41a85be6026\": rpc error: code = NotFound desc = could not find container \"54c413f049295c75ea245b7bf5b81932f10621e4a5575c34da54c41a85be6026\": container with ID starting with 54c413f049295c75ea245b7bf5b81932f10621e4a5575c34da54c41a85be6026 not found: ID does not exist" Jan 29 16:59:26 crc kubenswrapper[4886]: I0129 16:59:26.642257 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e333f39-f93b-4066-8e9f-4bd27e4d3672" path="/var/lib/kubelet/pods/3e333f39-f93b-4066-8e9f-4bd27e4d3672/volumes" Jan 29 16:59:26 crc kubenswrapper[4886]: I0129 16:59:26.804555 4886 generic.go:334] "Generic (PLEG): container finished" podID="c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e" containerID="ec42cdfc44ca840cbb4fad62e8838ff084ffff3f56e86e59dc4375f0d43ac3af" exitCode=0 Jan 29 16:59:26 crc kubenswrapper[4886]: I0129 16:59:26.804597 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp" event={"ID":"c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e","Type":"ContainerDied","Data":"ec42cdfc44ca840cbb4fad62e8838ff084ffff3f56e86e59dc4375f0d43ac3af"} Jan 29 16:59:27 crc kubenswrapper[4886]: I0129 16:59:27.818207 4886 generic.go:334] "Generic (PLEG): container finished" podID="c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e" containerID="a9e2a8679df68561a70f930872f41fede0f43990d3a760447e1bc513acacd728" exitCode=0 Jan 29 16:59:27 crc kubenswrapper[4886]: I0129 16:59:27.818383 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp" event={"ID":"c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e","Type":"ContainerDied","Data":"a9e2a8679df68561a70f930872f41fede0f43990d3a760447e1bc513acacd728"} Jan 29 16:59:29 crc kubenswrapper[4886]: I0129 16:59:29.193574 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp" Jan 29 16:59:29 crc kubenswrapper[4886]: I0129 16:59:29.274209 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e-bundle\") pod \"c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e\" (UID: \"c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e\") " Jan 29 16:59:29 crc kubenswrapper[4886]: I0129 16:59:29.274384 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdxz9\" (UniqueName: \"kubernetes.io/projected/c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e-kube-api-access-qdxz9\") pod \"c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e\" (UID: \"c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e\") " Jan 29 16:59:29 crc kubenswrapper[4886]: I0129 16:59:29.274543 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e-util\") pod \"c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e\" (UID: \"c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e\") " Jan 29 16:59:29 crc kubenswrapper[4886]: I0129 16:59:29.284590 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e-kube-api-access-qdxz9" (OuterVolumeSpecName: "kube-api-access-qdxz9") pod "c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e" (UID: "c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e"). InnerVolumeSpecName "kube-api-access-qdxz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:59:29 crc kubenswrapper[4886]: I0129 16:59:29.290575 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e-util" (OuterVolumeSpecName: "util") pod "c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e" (UID: "c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:59:29 crc kubenswrapper[4886]: I0129 16:59:29.291170 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e-bundle" (OuterVolumeSpecName: "bundle") pod "c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e" (UID: "c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:59:29 crc kubenswrapper[4886]: I0129 16:59:29.376775 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdxz9\" (UniqueName: \"kubernetes.io/projected/c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e-kube-api-access-qdxz9\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:29 crc kubenswrapper[4886]: I0129 16:59:29.376828 4886 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e-util\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:29 crc kubenswrapper[4886]: I0129 16:59:29.376848 4886 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:29 crc kubenswrapper[4886]: I0129 16:59:29.660528 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:59:29 crc kubenswrapper[4886]: I0129 16:59:29.660582 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:59:29 crc kubenswrapper[4886]: I0129 16:59:29.842564 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp" event={"ID":"c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e","Type":"ContainerDied","Data":"fb2914aa8dc2c93108ec5ed30e7b3b77724878da0cb11ac6bf0a6c92f19837f6"} Jan 29 16:59:29 crc kubenswrapper[4886]: I0129 16:59:29.842921 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb2914aa8dc2c93108ec5ed30e7b3b77724878da0cb11ac6bf0a6c92f19837f6" Jan 29 16:59:29 crc kubenswrapper[4886]: I0129 16:59:29.842982 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp" Jan 29 16:59:35 crc kubenswrapper[4886]: I0129 16:59:35.423561 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-86bf76f8cb-r9sbf"] Jan 29 16:59:35 crc kubenswrapper[4886]: E0129 16:59:35.424576 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e333f39-f93b-4066-8e9f-4bd27e4d3672" containerName="extract-utilities" Jan 29 16:59:35 crc kubenswrapper[4886]: I0129 16:59:35.424594 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e333f39-f93b-4066-8e9f-4bd27e4d3672" containerName="extract-utilities" Jan 29 16:59:35 crc kubenswrapper[4886]: E0129 16:59:35.424610 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e333f39-f93b-4066-8e9f-4bd27e4d3672" containerName="extract-content" Jan 29 16:59:35 crc kubenswrapper[4886]: I0129 16:59:35.424618 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e333f39-f93b-4066-8e9f-4bd27e4d3672" containerName="extract-content" Jan 29 16:59:35 crc kubenswrapper[4886]: E0129 16:59:35.424634 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e" containerName="pull" Jan 29 16:59:35 crc kubenswrapper[4886]: I0129 16:59:35.424642 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e" containerName="pull" Jan 29 16:59:35 crc kubenswrapper[4886]: E0129 16:59:35.424655 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e" containerName="extract" Jan 29 16:59:35 crc kubenswrapper[4886]: I0129 16:59:35.424662 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e" containerName="extract" Jan 29 16:59:35 crc kubenswrapper[4886]: E0129 16:59:35.424673 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e333f39-f93b-4066-8e9f-4bd27e4d3672" containerName="registry-server" Jan 29 16:59:35 crc kubenswrapper[4886]: I0129 16:59:35.424681 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e333f39-f93b-4066-8e9f-4bd27e4d3672" containerName="registry-server" Jan 29 16:59:35 crc kubenswrapper[4886]: E0129 16:59:35.424697 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e" containerName="util" Jan 29 16:59:35 crc kubenswrapper[4886]: I0129 16:59:35.424705 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e" containerName="util" Jan 29 16:59:35 crc kubenswrapper[4886]: I0129 16:59:35.424868 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e333f39-f93b-4066-8e9f-4bd27e4d3672" containerName="registry-server" Jan 29 16:59:35 crc kubenswrapper[4886]: I0129 16:59:35.424888 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e" containerName="extract" Jan 29 16:59:35 crc kubenswrapper[4886]: I0129 16:59:35.425563 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-86bf76f8cb-r9sbf" Jan 29 16:59:35 crc kubenswrapper[4886]: I0129 16:59:35.435923 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-86bf76f8cb-r9sbf"] Jan 29 16:59:35 crc kubenswrapper[4886]: I0129 16:59:35.444718 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-vp8fr" Jan 29 16:59:35 crc kubenswrapper[4886]: I0129 16:59:35.588035 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzj8t\" (UniqueName: \"kubernetes.io/projected/d4b791b8-523f-4cf0-9ec7-9283c2fd4dde-kube-api-access-dzj8t\") pod \"openstack-operator-controller-init-86bf76f8cb-r9sbf\" (UID: \"d4b791b8-523f-4cf0-9ec7-9283c2fd4dde\") " pod="openstack-operators/openstack-operator-controller-init-86bf76f8cb-r9sbf" Jan 29 16:59:35 crc kubenswrapper[4886]: I0129 16:59:35.689550 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzj8t\" (UniqueName: \"kubernetes.io/projected/d4b791b8-523f-4cf0-9ec7-9283c2fd4dde-kube-api-access-dzj8t\") pod \"openstack-operator-controller-init-86bf76f8cb-r9sbf\" (UID: \"d4b791b8-523f-4cf0-9ec7-9283c2fd4dde\") " pod="openstack-operators/openstack-operator-controller-init-86bf76f8cb-r9sbf" Jan 29 16:59:35 crc kubenswrapper[4886]: I0129 16:59:35.711782 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzj8t\" (UniqueName: \"kubernetes.io/projected/d4b791b8-523f-4cf0-9ec7-9283c2fd4dde-kube-api-access-dzj8t\") pod \"openstack-operator-controller-init-86bf76f8cb-r9sbf\" (UID: \"d4b791b8-523f-4cf0-9ec7-9283c2fd4dde\") " pod="openstack-operators/openstack-operator-controller-init-86bf76f8cb-r9sbf" Jan 29 16:59:35 crc kubenswrapper[4886]: I0129 16:59:35.744927 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-86bf76f8cb-r9sbf" Jan 29 16:59:36 crc kubenswrapper[4886]: I0129 16:59:36.208892 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-86bf76f8cb-r9sbf"] Jan 29 16:59:36 crc kubenswrapper[4886]: I0129 16:59:36.903889 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-86bf76f8cb-r9sbf" event={"ID":"d4b791b8-523f-4cf0-9ec7-9283c2fd4dde","Type":"ContainerStarted","Data":"ce6fb9f5ef9512b738ac1d0e983bd606f5bfc0e429cc5a338bcde6ac28bc6c37"} Jan 29 16:59:40 crc kubenswrapper[4886]: I0129 16:59:40.863518 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l6kcr"] Jan 29 16:59:40 crc kubenswrapper[4886]: I0129 16:59:40.865695 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l6kcr" Jan 29 16:59:40 crc kubenswrapper[4886]: I0129 16:59:40.891524 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l6kcr"] Jan 29 16:59:40 crc kubenswrapper[4886]: I0129 16:59:40.988135 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef4834a8-a534-49a1-ba4e-07543a1d73ff-catalog-content\") pod \"redhat-operators-l6kcr\" (UID: \"ef4834a8-a534-49a1-ba4e-07543a1d73ff\") " pod="openshift-marketplace/redhat-operators-l6kcr" Jan 29 16:59:40 crc kubenswrapper[4886]: I0129 16:59:40.988182 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp2tz\" (UniqueName: \"kubernetes.io/projected/ef4834a8-a534-49a1-ba4e-07543a1d73ff-kube-api-access-hp2tz\") pod \"redhat-operators-l6kcr\" (UID: \"ef4834a8-a534-49a1-ba4e-07543a1d73ff\") " pod="openshift-marketplace/redhat-operators-l6kcr" Jan 29 16:59:40 crc kubenswrapper[4886]: I0129 16:59:40.988280 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef4834a8-a534-49a1-ba4e-07543a1d73ff-utilities\") pod \"redhat-operators-l6kcr\" (UID: \"ef4834a8-a534-49a1-ba4e-07543a1d73ff\") " pod="openshift-marketplace/redhat-operators-l6kcr" Jan 29 16:59:41 crc kubenswrapper[4886]: I0129 16:59:41.089658 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef4834a8-a534-49a1-ba4e-07543a1d73ff-catalog-content\") pod \"redhat-operators-l6kcr\" (UID: \"ef4834a8-a534-49a1-ba4e-07543a1d73ff\") " pod="openshift-marketplace/redhat-operators-l6kcr" Jan 29 16:59:41 crc kubenswrapper[4886]: I0129 16:59:41.089708 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp2tz\" (UniqueName: \"kubernetes.io/projected/ef4834a8-a534-49a1-ba4e-07543a1d73ff-kube-api-access-hp2tz\") pod \"redhat-operators-l6kcr\" (UID: \"ef4834a8-a534-49a1-ba4e-07543a1d73ff\") " pod="openshift-marketplace/redhat-operators-l6kcr" Jan 29 16:59:41 crc kubenswrapper[4886]: I0129 16:59:41.089801 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef4834a8-a534-49a1-ba4e-07543a1d73ff-utilities\") pod \"redhat-operators-l6kcr\" (UID: \"ef4834a8-a534-49a1-ba4e-07543a1d73ff\") " pod="openshift-marketplace/redhat-operators-l6kcr" Jan 29 16:59:41 crc kubenswrapper[4886]: I0129 16:59:41.090159 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef4834a8-a534-49a1-ba4e-07543a1d73ff-catalog-content\") pod \"redhat-operators-l6kcr\" (UID: \"ef4834a8-a534-49a1-ba4e-07543a1d73ff\") " pod="openshift-marketplace/redhat-operators-l6kcr" Jan 29 16:59:41 crc kubenswrapper[4886]: I0129 16:59:41.090181 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef4834a8-a534-49a1-ba4e-07543a1d73ff-utilities\") pod \"redhat-operators-l6kcr\" (UID: \"ef4834a8-a534-49a1-ba4e-07543a1d73ff\") " pod="openshift-marketplace/redhat-operators-l6kcr" Jan 29 16:59:41 crc kubenswrapper[4886]: I0129 16:59:41.113393 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hp2tz\" (UniqueName: \"kubernetes.io/projected/ef4834a8-a534-49a1-ba4e-07543a1d73ff-kube-api-access-hp2tz\") pod \"redhat-operators-l6kcr\" (UID: \"ef4834a8-a534-49a1-ba4e-07543a1d73ff\") " pod="openshift-marketplace/redhat-operators-l6kcr" Jan 29 16:59:41 crc kubenswrapper[4886]: I0129 16:59:41.193735 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l6kcr" Jan 29 16:59:41 crc kubenswrapper[4886]: W0129 16:59:41.656561 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef4834a8_a534_49a1_ba4e_07543a1d73ff.slice/crio-49c5189fe2f9c09dc98e6dd9490ed2837b141ee31dfec16f46c3e6f0f0ff2d94 WatchSource:0}: Error finding container 49c5189fe2f9c09dc98e6dd9490ed2837b141ee31dfec16f46c3e6f0f0ff2d94: Status 404 returned error can't find the container with id 49c5189fe2f9c09dc98e6dd9490ed2837b141ee31dfec16f46c3e6f0f0ff2d94 Jan 29 16:59:41 crc kubenswrapper[4886]: I0129 16:59:41.659868 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l6kcr"] Jan 29 16:59:41 crc kubenswrapper[4886]: I0129 16:59:41.960200 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-86bf76f8cb-r9sbf" event={"ID":"d4b791b8-523f-4cf0-9ec7-9283c2fd4dde","Type":"ContainerStarted","Data":"a36a7c6e2180ec2f8bc93353d652a312e752e3911260d82dbdb2decdd7be960d"} Jan 29 16:59:41 crc kubenswrapper[4886]: I0129 16:59:41.961766 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-86bf76f8cb-r9sbf" Jan 29 16:59:41 crc kubenswrapper[4886]: I0129 16:59:41.963607 4886 generic.go:334] "Generic (PLEG): container finished" podID="ef4834a8-a534-49a1-ba4e-07543a1d73ff" containerID="7f55f9c6228c0244d9e3c7e38d2569229b65bf9a7ae3d928099a3cfae5ca1622" exitCode=0 Jan 29 16:59:41 crc kubenswrapper[4886]: I0129 16:59:41.963644 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6kcr" event={"ID":"ef4834a8-a534-49a1-ba4e-07543a1d73ff","Type":"ContainerDied","Data":"7f55f9c6228c0244d9e3c7e38d2569229b65bf9a7ae3d928099a3cfae5ca1622"} Jan 29 16:59:41 crc kubenswrapper[4886]: I0129 16:59:41.963665 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6kcr" event={"ID":"ef4834a8-a534-49a1-ba4e-07543a1d73ff","Type":"ContainerStarted","Data":"49c5189fe2f9c09dc98e6dd9490ed2837b141ee31dfec16f46c3e6f0f0ff2d94"} Jan 29 16:59:41 crc kubenswrapper[4886]: I0129 16:59:41.995306 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-86bf76f8cb-r9sbf" podStartSLOduration=2.465965501 podStartE2EDuration="6.995289115s" podCreationTimestamp="2026-01-29 16:59:35 +0000 UTC" firstStartedPulling="2026-01-29 16:59:36.21964987 +0000 UTC m=+2259.128369142" lastFinishedPulling="2026-01-29 16:59:40.748973484 +0000 UTC m=+2263.657692756" observedRunningTime="2026-01-29 16:59:41.993106334 +0000 UTC m=+2264.901825626" watchObservedRunningTime="2026-01-29 16:59:41.995289115 +0000 UTC m=+2264.904008397" Jan 29 16:59:42 crc kubenswrapper[4886]: E0129 16:59:42.955365 4886 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef4834a8_a534_49a1_ba4e_07543a1d73ff.slice/crio-36216559b8eb83c21708f2fd9d52738d4492c30985da6e15593311023eaff4e2.scope\": RecentStats: unable to find data in memory cache]" Jan 29 16:59:43 crc kubenswrapper[4886]: I0129 16:59:43.980020 4886 generic.go:334] "Generic (PLEG): container finished" podID="ef4834a8-a534-49a1-ba4e-07543a1d73ff" containerID="36216559b8eb83c21708f2fd9d52738d4492c30985da6e15593311023eaff4e2" exitCode=0 Jan 29 16:59:43 crc kubenswrapper[4886]: I0129 16:59:43.980111 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6kcr" event={"ID":"ef4834a8-a534-49a1-ba4e-07543a1d73ff","Type":"ContainerDied","Data":"36216559b8eb83c21708f2fd9d52738d4492c30985da6e15593311023eaff4e2"} Jan 29 16:59:44 crc kubenswrapper[4886]: I0129 16:59:44.674297 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ftjt4"] Jan 29 16:59:44 crc kubenswrapper[4886]: I0129 16:59:44.676642 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftjt4" Jan 29 16:59:44 crc kubenswrapper[4886]: I0129 16:59:44.679681 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ftjt4"] Jan 29 16:59:44 crc kubenswrapper[4886]: I0129 16:59:44.745102 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15a7d478-4fe8-4737-87e0-092b2309852b-catalog-content\") pod \"community-operators-ftjt4\" (UID: \"15a7d478-4fe8-4737-87e0-092b2309852b\") " pod="openshift-marketplace/community-operators-ftjt4" Jan 29 16:59:44 crc kubenswrapper[4886]: I0129 16:59:44.745191 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15a7d478-4fe8-4737-87e0-092b2309852b-utilities\") pod \"community-operators-ftjt4\" (UID: \"15a7d478-4fe8-4737-87e0-092b2309852b\") " pod="openshift-marketplace/community-operators-ftjt4" Jan 29 16:59:44 crc kubenswrapper[4886]: I0129 16:59:44.745219 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9rhq\" (UniqueName: \"kubernetes.io/projected/15a7d478-4fe8-4737-87e0-092b2309852b-kube-api-access-l9rhq\") pod \"community-operators-ftjt4\" (UID: \"15a7d478-4fe8-4737-87e0-092b2309852b\") " pod="openshift-marketplace/community-operators-ftjt4" Jan 29 16:59:44 crc kubenswrapper[4886]: I0129 16:59:44.846445 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15a7d478-4fe8-4737-87e0-092b2309852b-utilities\") pod \"community-operators-ftjt4\" (UID: \"15a7d478-4fe8-4737-87e0-092b2309852b\") " pod="openshift-marketplace/community-operators-ftjt4" Jan 29 16:59:44 crc kubenswrapper[4886]: I0129 16:59:44.846502 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9rhq\" (UniqueName: \"kubernetes.io/projected/15a7d478-4fe8-4737-87e0-092b2309852b-kube-api-access-l9rhq\") pod \"community-operators-ftjt4\" (UID: \"15a7d478-4fe8-4737-87e0-092b2309852b\") " pod="openshift-marketplace/community-operators-ftjt4" Jan 29 16:59:44 crc kubenswrapper[4886]: I0129 16:59:44.846602 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/15a7d478-4fe8-4737-87e0-092b2309852b-catalog-content\") pod \"community-operators-ftjt4\" (UID: \"15a7d478-4fe8-4737-87e0-092b2309852b\") " pod="openshift-marketplace/community-operators-ftjt4" Jan 29 16:59:44 crc kubenswrapper[4886]: I0129 16:59:44.847147 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15a7d478-4fe8-4737-87e0-092b2309852b-catalog-content\") pod \"community-operators-ftjt4\" (UID: \"15a7d478-4fe8-4737-87e0-092b2309852b\") " pod="openshift-marketplace/community-operators-ftjt4" Jan 29 16:59:44 crc kubenswrapper[4886]: I0129 16:59:44.847146 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15a7d478-4fe8-4737-87e0-092b2309852b-utilities\") pod \"community-operators-ftjt4\" (UID: \"15a7d478-4fe8-4737-87e0-092b2309852b\") " pod="openshift-marketplace/community-operators-ftjt4" Jan 29 16:59:44 crc kubenswrapper[4886]: I0129 16:59:44.866206 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9rhq\" (UniqueName: \"kubernetes.io/projected/15a7d478-4fe8-4737-87e0-092b2309852b-kube-api-access-l9rhq\") pod \"community-operators-ftjt4\" (UID: \"15a7d478-4fe8-4737-87e0-092b2309852b\") " pod="openshift-marketplace/community-operators-ftjt4" Jan 29 16:59:44 crc kubenswrapper[4886]: I0129 16:59:44.989882 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6kcr" event={"ID":"ef4834a8-a534-49a1-ba4e-07543a1d73ff","Type":"ContainerStarted","Data":"fc506e5a1038e5ccbab48d49928b287cd8545ce7830f69a33adf11734bda8aaf"} Jan 29 16:59:45 crc kubenswrapper[4886]: I0129 16:59:45.002850 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ftjt4" Jan 29 16:59:45 crc kubenswrapper[4886]: I0129 16:59:45.013267 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l6kcr" podStartSLOduration=2.505858937 podStartE2EDuration="5.013244437s" podCreationTimestamp="2026-01-29 16:59:40 +0000 UTC" firstStartedPulling="2026-01-29 16:59:41.96559099 +0000 UTC m=+2264.874310262" lastFinishedPulling="2026-01-29 16:59:44.47297649 +0000 UTC m=+2267.381695762" observedRunningTime="2026-01-29 16:59:45.009611664 +0000 UTC m=+2267.918330936" watchObservedRunningTime="2026-01-29 16:59:45.013244437 +0000 UTC m=+2267.921963709" Jan 29 16:59:45 crc kubenswrapper[4886]: I0129 16:59:45.538899 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ftjt4"] Jan 29 16:59:45 crc kubenswrapper[4886]: I0129 16:59:45.748365 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-86bf76f8cb-r9sbf" Jan 29 16:59:45 crc kubenswrapper[4886]: I0129 16:59:45.998583 4886 generic.go:334] "Generic (PLEG): container finished" podID="15a7d478-4fe8-4737-87e0-092b2309852b" containerID="958784e76577e7087aaa7c7d11f4f78ba2b156b2be0c93f2ecfe7b0844514e68" exitCode=0 Jan 29 16:59:45 crc kubenswrapper[4886]: I0129 16:59:45.998802 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftjt4" event={"ID":"15a7d478-4fe8-4737-87e0-092b2309852b","Type":"ContainerDied","Data":"958784e76577e7087aaa7c7d11f4f78ba2b156b2be0c93f2ecfe7b0844514e68"} Jan 29 16:59:45 crc kubenswrapper[4886]: I0129 16:59:45.998850 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftjt4" event={"ID":"15a7d478-4fe8-4737-87e0-092b2309852b","Type":"ContainerStarted","Data":"08d43d29ab5b2356ae9a1a801ed2dac107c26afe2209d1165714e6d9a8ed91ec"} Jan 29 16:59:47 crc kubenswrapper[4886]: I0129 16:59:47.006833 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftjt4" event={"ID":"15a7d478-4fe8-4737-87e0-092b2309852b","Type":"ContainerStarted","Data":"ab0631061378de0825d90277572d0835271d2870feb942a13be33aaadea313be"} Jan 29 16:59:48 crc kubenswrapper[4886]: I0129 16:59:48.017209 4886 generic.go:334] "Generic (PLEG): container finished" podID="15a7d478-4fe8-4737-87e0-092b2309852b" containerID="ab0631061378de0825d90277572d0835271d2870feb942a13be33aaadea313be" exitCode=0 Jan 29 16:59:48 crc kubenswrapper[4886]: I0129 16:59:48.017258 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftjt4" event={"ID":"15a7d478-4fe8-4737-87e0-092b2309852b","Type":"ContainerDied","Data":"ab0631061378de0825d90277572d0835271d2870feb942a13be33aaadea313be"} Jan 29 16:59:48 crc kubenswrapper[4886]: I0129 16:59:48.019267 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:59:51 crc kubenswrapper[4886]: I0129 16:59:51.194647 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l6kcr" Jan 29 16:59:51 crc kubenswrapper[4886]: I0129 16:59:51.195030 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l6kcr" Jan 29 16:59:51 crc kubenswrapper[4886]: I0129 16:59:51.245472 4886 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l6kcr" Jan 29 16:59:52 crc kubenswrapper[4886]: I0129 16:59:52.048770 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftjt4" event={"ID":"15a7d478-4fe8-4737-87e0-092b2309852b","Type":"ContainerStarted","Data":"178a4fc4a6ab2ef1c06ebd2a559deefd40e2d485747bf60722673762411e0255"} Jan 29 16:59:52 crc kubenswrapper[4886]: I0129 16:59:52.072819 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ftjt4" podStartSLOduration=3.251168691 podStartE2EDuration="8.072796351s" podCreationTimestamp="2026-01-29 16:59:44 +0000 UTC" firstStartedPulling="2026-01-29 16:59:45.999971881 +0000 UTC m=+2268.908691153" lastFinishedPulling="2026-01-29 16:59:50.821599551 +0000 UTC m=+2273.730318813" observedRunningTime="2026-01-29 16:59:52.066093532 +0000 UTC m=+2274.974812814" watchObservedRunningTime="2026-01-29 16:59:52.072796351 +0000 UTC m=+2274.981515623" Jan 29 16:59:52 crc kubenswrapper[4886]: I0129 16:59:52.094854 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l6kcr" Jan 29 16:59:54 crc kubenswrapper[4886]: I0129 16:59:54.259529 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l6kcr"] Jan 29 16:59:54 crc kubenswrapper[4886]: I0129 16:59:54.260083 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l6kcr" podUID="ef4834a8-a534-49a1-ba4e-07543a1d73ff" containerName="registry-server" containerID="cri-o://fc506e5a1038e5ccbab48d49928b287cd8545ce7830f69a33adf11734bda8aaf" gracePeriod=2 Jan 29 16:59:54 crc kubenswrapper[4886]: E0129 16:59:54.928890 4886 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef4834a8_a534_49a1_ba4e_07543a1d73ff.slice/crio-conmon-fc506e5a1038e5ccbab48d49928b287cd8545ce7830f69a33adf11734bda8aaf.scope\": RecentStats: unable to find data in memory cache]" Jan 29 16:59:55 crc kubenswrapper[4886]: I0129 16:59:55.003113 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ftjt4" Jan 29 16:59:55 crc kubenswrapper[4886]: I0129 16:59:55.003177 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ftjt4" Jan 29 16:59:55 crc kubenswrapper[4886]: I0129 16:59:55.062157 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ftjt4" Jan 29 16:59:55 crc kubenswrapper[4886]: I0129 16:59:55.085782 4886 generic.go:334] "Generic (PLEG): container finished" podID="ef4834a8-a534-49a1-ba4e-07543a1d73ff" containerID="fc506e5a1038e5ccbab48d49928b287cd8545ce7830f69a33adf11734bda8aaf" exitCode=0 Jan 29 16:59:55 crc kubenswrapper[4886]: I0129 16:59:55.086838 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6kcr" event={"ID":"ef4834a8-a534-49a1-ba4e-07543a1d73ff","Type":"ContainerDied","Data":"fc506e5a1038e5ccbab48d49928b287cd8545ce7830f69a33adf11734bda8aaf"} Jan 29 16:59:55 crc kubenswrapper[4886]: I0129 16:59:55.222290 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l6kcr" Jan 29 16:59:55 crc kubenswrapper[4886]: I0129 16:59:55.317898 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef4834a8-a534-49a1-ba4e-07543a1d73ff-utilities\") pod \"ef4834a8-a534-49a1-ba4e-07543a1d73ff\" (UID: \"ef4834a8-a534-49a1-ba4e-07543a1d73ff\") " Jan 29 16:59:55 crc kubenswrapper[4886]: I0129 16:59:55.317966 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hp2tz\" (UniqueName: \"kubernetes.io/projected/ef4834a8-a534-49a1-ba4e-07543a1d73ff-kube-api-access-hp2tz\") pod \"ef4834a8-a534-49a1-ba4e-07543a1d73ff\" (UID: \"ef4834a8-a534-49a1-ba4e-07543a1d73ff\") " Jan 29 16:59:55 crc kubenswrapper[4886]: I0129 16:59:55.318101 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef4834a8-a534-49a1-ba4e-07543a1d73ff-catalog-content\") pod \"ef4834a8-a534-49a1-ba4e-07543a1d73ff\" (UID: \"ef4834a8-a534-49a1-ba4e-07543a1d73ff\") " Jan 29 16:59:55 crc kubenswrapper[4886]: I0129 16:59:55.319027 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef4834a8-a534-49a1-ba4e-07543a1d73ff-utilities" (OuterVolumeSpecName: "utilities") pod "ef4834a8-a534-49a1-ba4e-07543a1d73ff" (UID: "ef4834a8-a534-49a1-ba4e-07543a1d73ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:59:55 crc kubenswrapper[4886]: I0129 16:59:55.325965 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef4834a8-a534-49a1-ba4e-07543a1d73ff-kube-api-access-hp2tz" (OuterVolumeSpecName: "kube-api-access-hp2tz") pod "ef4834a8-a534-49a1-ba4e-07543a1d73ff" (UID: "ef4834a8-a534-49a1-ba4e-07543a1d73ff"). InnerVolumeSpecName "kube-api-access-hp2tz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:59:55 crc kubenswrapper[4886]: I0129 16:59:55.420102 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef4834a8-a534-49a1-ba4e-07543a1d73ff-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:55 crc kubenswrapper[4886]: I0129 16:59:55.420141 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hp2tz\" (UniqueName: \"kubernetes.io/projected/ef4834a8-a534-49a1-ba4e-07543a1d73ff-kube-api-access-hp2tz\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:55 crc kubenswrapper[4886]: I0129 16:59:55.467441 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef4834a8-a534-49a1-ba4e-07543a1d73ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef4834a8-a534-49a1-ba4e-07543a1d73ff" (UID: "ef4834a8-a534-49a1-ba4e-07543a1d73ff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:59:55 crc kubenswrapper[4886]: I0129 16:59:55.521208 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef4834a8-a534-49a1-ba4e-07543a1d73ff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4886]: I0129 16:59:56.095908 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6kcr" event={"ID":"ef4834a8-a534-49a1-ba4e-07543a1d73ff","Type":"ContainerDied","Data":"49c5189fe2f9c09dc98e6dd9490ed2837b141ee31dfec16f46c3e6f0f0ff2d94"} Jan 29 16:59:56 crc kubenswrapper[4886]: I0129 16:59:56.095974 4886 scope.go:117] "RemoveContainer" containerID="fc506e5a1038e5ccbab48d49928b287cd8545ce7830f69a33adf11734bda8aaf" Jan 29 16:59:56 crc kubenswrapper[4886]: I0129 16:59:56.095998 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l6kcr" Jan 29 16:59:56 crc kubenswrapper[4886]: I0129 16:59:56.119929 4886 scope.go:117] "RemoveContainer" containerID="36216559b8eb83c21708f2fd9d52738d4492c30985da6e15593311023eaff4e2" Jan 29 16:59:56 crc kubenswrapper[4886]: I0129 16:59:56.128271 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l6kcr"] Jan 29 16:59:56 crc kubenswrapper[4886]: I0129 16:59:56.135544 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l6kcr"] Jan 29 16:59:56 crc kubenswrapper[4886]: I0129 16:59:56.160713 4886 scope.go:117] "RemoveContainer" containerID="7f55f9c6228c0244d9e3c7e38d2569229b65bf9a7ae3d928099a3cfae5ca1622" Jan 29 16:59:56 crc kubenswrapper[4886]: I0129 16:59:56.625377 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef4834a8-a534-49a1-ba4e-07543a1d73ff" path="/var/lib/kubelet/pods/ef4834a8-a534-49a1-ba4e-07543a1d73ff/volumes" Jan 29 16:59:59 crc kubenswrapper[4886]: I0129 16:59:59.660762 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:59:59 crc kubenswrapper[4886]: I0129 16:59:59.661403 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:00:00 crc kubenswrapper[4886]: I0129 17:00:00.134669 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495100-wk666"] Jan 29 17:00:00 crc kubenswrapper[4886]: E0129 17:00:00.136558 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef4834a8-a534-49a1-ba4e-07543a1d73ff" containerName="registry-server" Jan 29 17:00:00 crc kubenswrapper[4886]: I0129 17:00:00.136582 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef4834a8-a534-49a1-ba4e-07543a1d73ff" containerName="registry-server" Jan 29 17:00:00 crc kubenswrapper[4886]: E0129 17:00:00.136604 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef4834a8-a534-49a1-ba4e-07543a1d73ff" containerName="extract-utilities" Jan 29 17:00:00 crc kubenswrapper[4886]: 
I0129 17:00:00.136612 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef4834a8-a534-49a1-ba4e-07543a1d73ff" containerName="extract-utilities" Jan 29 17:00:00 crc kubenswrapper[4886]: E0129 17:00:00.136627 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef4834a8-a534-49a1-ba4e-07543a1d73ff" containerName="extract-content" Jan 29 17:00:00 crc kubenswrapper[4886]: I0129 17:00:00.136634 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef4834a8-a534-49a1-ba4e-07543a1d73ff" containerName="extract-content" Jan 29 17:00:00 crc kubenswrapper[4886]: I0129 17:00:00.136812 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef4834a8-a534-49a1-ba4e-07543a1d73ff" containerName="registry-server" Jan 29 17:00:00 crc kubenswrapper[4886]: I0129 17:00:00.137380 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wk666" Jan 29 17:00:00 crc kubenswrapper[4886]: I0129 17:00:00.139045 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 17:00:00 crc kubenswrapper[4886]: I0129 17:00:00.139390 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 17:00:00 crc kubenswrapper[4886]: I0129 17:00:00.153318 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495100-wk666"] Jan 29 17:00:00 crc kubenswrapper[4886]: I0129 17:00:00.200934 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wcxn\" (UniqueName: \"kubernetes.io/projected/3da2d212-de01-458b-9805-8eb21ed83324-kube-api-access-4wcxn\") pod \"collect-profiles-29495100-wk666\" (UID: \"3da2d212-de01-458b-9805-8eb21ed83324\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wk666" Jan 29 17:00:00 crc kubenswrapper[4886]: I0129 17:00:00.201188 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3da2d212-de01-458b-9805-8eb21ed83324-secret-volume\") pod \"collect-profiles-29495100-wk666\" (UID: \"3da2d212-de01-458b-9805-8eb21ed83324\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wk666" Jan 29 17:00:00 crc kubenswrapper[4886]: I0129 17:00:00.201590 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3da2d212-de01-458b-9805-8eb21ed83324-config-volume\") pod \"collect-profiles-29495100-wk666\" (UID: \"3da2d212-de01-458b-9805-8eb21ed83324\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wk666" Jan 29 17:00:00 crc kubenswrapper[4886]: I0129 17:00:00.303314 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3da2d212-de01-458b-9805-8eb21ed83324-config-volume\") pod \"collect-profiles-29495100-wk666\" (UID: \"3da2d212-de01-458b-9805-8eb21ed83324\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wk666" Jan 29 17:00:00 crc kubenswrapper[4886]: I0129 17:00:00.303398 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wcxn\" (UniqueName: 
\"kubernetes.io/projected/3da2d212-de01-458b-9805-8eb21ed83324-kube-api-access-4wcxn\") pod \"collect-profiles-29495100-wk666\" (UID: \"3da2d212-de01-458b-9805-8eb21ed83324\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wk666" Jan 29 17:00:00 crc kubenswrapper[4886]: I0129 17:00:00.303450 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3da2d212-de01-458b-9805-8eb21ed83324-secret-volume\") pod \"collect-profiles-29495100-wk666\" (UID: \"3da2d212-de01-458b-9805-8eb21ed83324\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wk666" Jan 29 17:00:00 crc kubenswrapper[4886]: I0129 17:00:00.304523 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3da2d212-de01-458b-9805-8eb21ed83324-config-volume\") pod \"collect-profiles-29495100-wk666\" (UID: \"3da2d212-de01-458b-9805-8eb21ed83324\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wk666" Jan 29 17:00:00 crc kubenswrapper[4886]: I0129 17:00:00.316879 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3da2d212-de01-458b-9805-8eb21ed83324-secret-volume\") pod \"collect-profiles-29495100-wk666\" (UID: \"3da2d212-de01-458b-9805-8eb21ed83324\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wk666" Jan 29 17:00:00 crc kubenswrapper[4886]: I0129 17:00:00.320781 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wcxn\" (UniqueName: \"kubernetes.io/projected/3da2d212-de01-458b-9805-8eb21ed83324-kube-api-access-4wcxn\") pod \"collect-profiles-29495100-wk666\" (UID: \"3da2d212-de01-458b-9805-8eb21ed83324\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wk666" Jan 29 17:00:00 crc kubenswrapper[4886]: I0129 17:00:00.453250 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wk666" Jan 29 17:00:00 crc kubenswrapper[4886]: I0129 17:00:00.875639 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495100-wk666"] Jan 29 17:00:00 crc kubenswrapper[4886]: W0129 17:00:00.888007 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3da2d212_de01_458b_9805_8eb21ed83324.slice/crio-948a7f05186c9ca396055303ccf91cac221f2785d4fcde2bf60418d979c118d9 WatchSource:0}: Error finding container 948a7f05186c9ca396055303ccf91cac221f2785d4fcde2bf60418d979c118d9: Status 404 returned error can't find the container with id 948a7f05186c9ca396055303ccf91cac221f2785d4fcde2bf60418d979c118d9 Jan 29 17:00:01 crc kubenswrapper[4886]: I0129 17:00:01.146818 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wk666" event={"ID":"3da2d212-de01-458b-9805-8eb21ed83324","Type":"ContainerStarted","Data":"3f2a5d53f1118cb99d6ac0f75863b8e8419b33babb29267642e06437ed3d61f8"} Jan 29 17:00:01 crc kubenswrapper[4886]: I0129 17:00:01.147162 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wk666" event={"ID":"3da2d212-de01-458b-9805-8eb21ed83324","Type":"ContainerStarted","Data":"948a7f05186c9ca396055303ccf91cac221f2785d4fcde2bf60418d979c118d9"} Jan 29 17:00:02 crc kubenswrapper[4886]: I0129 17:00:02.156603 4886 generic.go:334] "Generic (PLEG): container finished" podID="3da2d212-de01-458b-9805-8eb21ed83324" containerID="3f2a5d53f1118cb99d6ac0f75863b8e8419b33babb29267642e06437ed3d61f8" exitCode=0 Jan 29 17:00:02 crc kubenswrapper[4886]: I0129 17:00:02.156687 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wk666" event={"ID":"3da2d212-de01-458b-9805-8eb21ed83324","Type":"ContainerDied","Data":"3f2a5d53f1118cb99d6ac0f75863b8e8419b33babb29267642e06437ed3d61f8"} Jan 29 17:00:03 crc kubenswrapper[4886]: I0129 17:00:03.515948 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wk666" Jan 29 17:00:03 crc kubenswrapper[4886]: I0129 17:00:03.567969 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3da2d212-de01-458b-9805-8eb21ed83324-config-volume\") pod \"3da2d212-de01-458b-9805-8eb21ed83324\" (UID: \"3da2d212-de01-458b-9805-8eb21ed83324\") " Jan 29 17:00:03 crc kubenswrapper[4886]: I0129 17:00:03.568166 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wcxn\" (UniqueName: \"kubernetes.io/projected/3da2d212-de01-458b-9805-8eb21ed83324-kube-api-access-4wcxn\") pod \"3da2d212-de01-458b-9805-8eb21ed83324\" (UID: \"3da2d212-de01-458b-9805-8eb21ed83324\") " Jan 29 17:00:03 crc kubenswrapper[4886]: I0129 17:00:03.568379 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3da2d212-de01-458b-9805-8eb21ed83324-secret-volume\") pod \"3da2d212-de01-458b-9805-8eb21ed83324\" (UID: \"3da2d212-de01-458b-9805-8eb21ed83324\") " Jan 29 17:00:03 crc kubenswrapper[4886]: I0129 17:00:03.569645 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3da2d212-de01-458b-9805-8eb21ed83324-config-volume" (OuterVolumeSpecName: "config-volume") pod "3da2d212-de01-458b-9805-8eb21ed83324" (UID: "3da2d212-de01-458b-9805-8eb21ed83324"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:00:03 crc kubenswrapper[4886]: I0129 17:00:03.573874 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3da2d212-de01-458b-9805-8eb21ed83324-kube-api-access-4wcxn" (OuterVolumeSpecName: "kube-api-access-4wcxn") pod "3da2d212-de01-458b-9805-8eb21ed83324" (UID: "3da2d212-de01-458b-9805-8eb21ed83324"). InnerVolumeSpecName "kube-api-access-4wcxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:00:03 crc kubenswrapper[4886]: I0129 17:00:03.575284 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da2d212-de01-458b-9805-8eb21ed83324-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3da2d212-de01-458b-9805-8eb21ed83324" (UID: "3da2d212-de01-458b-9805-8eb21ed83324"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:03 crc kubenswrapper[4886]: I0129 17:00:03.672188 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wcxn\" (UniqueName: \"kubernetes.io/projected/3da2d212-de01-458b-9805-8eb21ed83324-kube-api-access-4wcxn\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:03 crc kubenswrapper[4886]: I0129 17:00:03.672251 4886 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3da2d212-de01-458b-9805-8eb21ed83324-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:03 crc kubenswrapper[4886]: I0129 17:00:03.672277 4886 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3da2d212-de01-458b-9805-8eb21ed83324-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:04 crc kubenswrapper[4886]: I0129 17:00:04.187901 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wk666" event={"ID":"3da2d212-de01-458b-9805-8eb21ed83324","Type":"ContainerDied","Data":"948a7f05186c9ca396055303ccf91cac221f2785d4fcde2bf60418d979c118d9"} Jan 29 17:00:04 crc kubenswrapper[4886]: I0129 17:00:04.188250 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="948a7f05186c9ca396055303ccf91cac221f2785d4fcde2bf60418d979c118d9" Jan 29 17:00:04 crc kubenswrapper[4886]: I0129 17:00:04.187992 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wk666" Jan 29 17:00:04 crc kubenswrapper[4886]: I0129 17:00:04.582479 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495055-bkqmf"] Jan 29 17:00:04 crc kubenswrapper[4886]: I0129 17:00:04.599435 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495055-bkqmf"] Jan 29 17:00:04 crc kubenswrapper[4886]: I0129 17:00:04.626532 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9" path="/var/lib/kubelet/pods/a7a20685-8c41-4c3b-9b91-fe1e05cf5fe9/volumes" Jan 29 17:00:05 crc kubenswrapper[4886]: I0129 17:00:05.073109 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ftjt4" Jan 29 17:00:05 crc kubenswrapper[4886]: I0129 17:00:05.136082 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ftjt4"] Jan 29 17:00:05 crc kubenswrapper[4886]: I0129 17:00:05.196487 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ftjt4" podUID="15a7d478-4fe8-4737-87e0-092b2309852b" containerName="registry-server" containerID="cri-o://178a4fc4a6ab2ef1c06ebd2a559deefd40e2d485747bf60722673762411e0255" gracePeriod=2 Jan 29 17:00:06 crc kubenswrapper[4886]: I0129 17:00:06.209990 4886 generic.go:334] "Generic (PLEG): container finished" podID="15a7d478-4fe8-4737-87e0-092b2309852b" containerID="178a4fc4a6ab2ef1c06ebd2a559deefd40e2d485747bf60722673762411e0255" exitCode=0 Jan 29 17:00:06 crc kubenswrapper[4886]: I0129 17:00:06.210081 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftjt4" 
event={"ID":"15a7d478-4fe8-4737-87e0-092b2309852b","Type":"ContainerDied","Data":"178a4fc4a6ab2ef1c06ebd2a559deefd40e2d485747bf60722673762411e0255"} Jan 29 17:00:06 crc kubenswrapper[4886]: I0129 17:00:06.339796 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftjt4" Jan 29 17:00:06 crc kubenswrapper[4886]: I0129 17:00:06.423170 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9rhq\" (UniqueName: \"kubernetes.io/projected/15a7d478-4fe8-4737-87e0-092b2309852b-kube-api-access-l9rhq\") pod \"15a7d478-4fe8-4737-87e0-092b2309852b\" (UID: \"15a7d478-4fe8-4737-87e0-092b2309852b\") " Jan 29 17:00:06 crc kubenswrapper[4886]: I0129 17:00:06.423248 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15a7d478-4fe8-4737-87e0-092b2309852b-catalog-content\") pod \"15a7d478-4fe8-4737-87e0-092b2309852b\" (UID: \"15a7d478-4fe8-4737-87e0-092b2309852b\") " Jan 29 17:00:06 crc kubenswrapper[4886]: I0129 17:00:06.423299 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15a7d478-4fe8-4737-87e0-092b2309852b-utilities\") pod \"15a7d478-4fe8-4737-87e0-092b2309852b\" (UID: \"15a7d478-4fe8-4737-87e0-092b2309852b\") " Jan 29 17:00:06 crc kubenswrapper[4886]: I0129 17:00:06.424974 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15a7d478-4fe8-4737-87e0-092b2309852b-utilities" (OuterVolumeSpecName: "utilities") pod "15a7d478-4fe8-4737-87e0-092b2309852b" (UID: "15a7d478-4fe8-4737-87e0-092b2309852b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4886]: I0129 17:00:06.429592 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15a7d478-4fe8-4737-87e0-092b2309852b-kube-api-access-l9rhq" (OuterVolumeSpecName: "kube-api-access-l9rhq") pod "15a7d478-4fe8-4737-87e0-092b2309852b" (UID: "15a7d478-4fe8-4737-87e0-092b2309852b"). InnerVolumeSpecName "kube-api-access-l9rhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4886]: I0129 17:00:06.490053 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15a7d478-4fe8-4737-87e0-092b2309852b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15a7d478-4fe8-4737-87e0-092b2309852b" (UID: "15a7d478-4fe8-4737-87e0-092b2309852b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4886]: I0129 17:00:06.525511 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9rhq\" (UniqueName: \"kubernetes.io/projected/15a7d478-4fe8-4737-87e0-092b2309852b-kube-api-access-l9rhq\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4886]: I0129 17:00:06.525540 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15a7d478-4fe8-4737-87e0-092b2309852b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4886]: I0129 17:00:06.525551 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15a7d478-4fe8-4737-87e0-092b2309852b-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:07 crc kubenswrapper[4886]: I0129 17:00:07.221812 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftjt4" event={"ID":"15a7d478-4fe8-4737-87e0-092b2309852b","Type":"ContainerDied","Data":"08d43d29ab5b2356ae9a1a801ed2dac107c26afe2209d1165714e6d9a8ed91ec"} Jan 29 17:00:07 crc kubenswrapper[4886]: I0129 17:00:07.222175 4886 scope.go:117] "RemoveContainer" containerID="178a4fc4a6ab2ef1c06ebd2a559deefd40e2d485747bf60722673762411e0255" Jan 29 17:00:07 crc kubenswrapper[4886]: I0129 17:00:07.221927 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftjt4" Jan 29 17:00:07 crc kubenswrapper[4886]: I0129 17:00:07.249276 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ftjt4"] Jan 29 17:00:07 crc kubenswrapper[4886]: I0129 17:00:07.251087 4886 scope.go:117] "RemoveContainer" containerID="ab0631061378de0825d90277572d0835271d2870feb942a13be33aaadea313be" Jan 29 17:00:07 crc kubenswrapper[4886]: I0129 17:00:07.270496 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ftjt4"] Jan 29 17:00:07 crc kubenswrapper[4886]: I0129 17:00:07.280591 4886 scope.go:117] "RemoveContainer" containerID="958784e76577e7087aaa7c7d11f4f78ba2b156b2be0c93f2ecfe7b0844514e68" Jan 29 17:00:08 crc kubenswrapper[4886]: I0129 17:00:08.625523 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15a7d478-4fe8-4737-87e0-092b2309852b" path="/var/lib/kubelet/pods/15a7d478-4fe8-4737-87e0-092b2309852b/volumes" Jan 29 17:00:21 crc kubenswrapper[4886]: I0129 17:00:21.936725 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-2g2cz"] Jan 29 17:00:21 crc kubenswrapper[4886]: E0129 17:00:21.937620 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da2d212-de01-458b-9805-8eb21ed83324" containerName="collect-profiles" Jan 29 17:00:21 crc kubenswrapper[4886]: I0129 17:00:21.937637 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da2d212-de01-458b-9805-8eb21ed83324" containerName="collect-profiles" Jan 29 17:00:21 crc kubenswrapper[4886]: E0129 17:00:21.937666 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a7d478-4fe8-4737-87e0-092b2309852b" containerName="extract-content" Jan 29 17:00:21 crc kubenswrapper[4886]: I0129 17:00:21.937673 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a7d478-4fe8-4737-87e0-092b2309852b" containerName="extract-content" Jan 29 17:00:21 crc kubenswrapper[4886]: 
E0129 17:00:21.937689 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a7d478-4fe8-4737-87e0-092b2309852b" containerName="registry-server" Jan 29 17:00:21 crc kubenswrapper[4886]: I0129 17:00:21.937695 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a7d478-4fe8-4737-87e0-092b2309852b" containerName="registry-server" Jan 29 17:00:21 crc kubenswrapper[4886]: E0129 17:00:21.937711 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a7d478-4fe8-4737-87e0-092b2309852b" containerName="extract-utilities" Jan 29 17:00:21 crc kubenswrapper[4886]: I0129 17:00:21.937717 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a7d478-4fe8-4737-87e0-092b2309852b" containerName="extract-utilities" Jan 29 17:00:21 crc kubenswrapper[4886]: I0129 17:00:21.937859 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="15a7d478-4fe8-4737-87e0-092b2309852b" containerName="registry-server" Jan 29 17:00:21 crc kubenswrapper[4886]: I0129 17:00:21.937880 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="3da2d212-de01-458b-9805-8eb21ed83324" containerName="collect-profiles" Jan 29 17:00:21 crc kubenswrapper[4886]: I0129 17:00:21.938395 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-w6qc6"] Jan 29 17:00:21 crc kubenswrapper[4886]: I0129 17:00:21.938967 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-2g2cz" Jan 29 17:00:21 crc kubenswrapper[4886]: I0129 17:00:21.939206 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-w6qc6" Jan 29 17:00:21 crc kubenswrapper[4886]: I0129 17:00:21.941384 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-gnsp5" Jan 29 17:00:21 crc kubenswrapper[4886]: I0129 17:00:21.941439 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-br6j7" Jan 29 17:00:21 crc kubenswrapper[4886]: I0129 17:00:21.956763 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-w6qc6"] Jan 29 17:00:21 crc kubenswrapper[4886]: I0129 17:00:21.992831 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-244l4\" (UniqueName: \"kubernetes.io/projected/4e16e340-e213-492a-9c93-851df7b1bddb-kube-api-access-244l4\") pod \"cinder-operator-controller-manager-8d874c8fc-w6qc6\" (UID: \"4e16e340-e213-492a-9c93-851df7b1bddb\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-w6qc6" Jan 29 17:00:21 crc kubenswrapper[4886]: I0129 17:00:21.992940 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjlpn\" (UniqueName: \"kubernetes.io/projected/3ffc5e8b-7f7a-4585-b43d-07e2589493c9-kube-api-access-mjlpn\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-2g2cz\" (UID: \"3ffc5e8b-7f7a-4585-b43d-07e2589493c9\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-2g2cz" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.008221 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-2g2cz"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.074592 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-rhxnz"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.075761 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rhxnz" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.078346 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-xgnw7" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.085263 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-pfw9c"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.086392 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-pfw9c" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.090786 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-hwqr9" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.095376 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjlpn\" (UniqueName: \"kubernetes.io/projected/3ffc5e8b-7f7a-4585-b43d-07e2589493c9-kube-api-access-mjlpn\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-2g2cz\" (UID: \"3ffc5e8b-7f7a-4585-b43d-07e2589493c9\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-2g2cz" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.095464 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-244l4\" (UniqueName: \"kubernetes.io/projected/4e16e340-e213-492a-9c93-851df7b1bddb-kube-api-access-244l4\") pod \"cinder-operator-controller-manager-8d874c8fc-w6qc6\" (UID: \"4e16e340-e213-492a-9c93-851df7b1bddb\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-w6qc6" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.095511 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rxpp\" (UniqueName: \"kubernetes.io/projected/d01e417c-a1b0-445d-83eb-f3c21a492138-kube-api-access-5rxpp\") pod \"designate-operator-controller-manager-6d9697b7f4-rhxnz\" (UID: \"d01e417c-a1b0-445d-83eb-f3c21a492138\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rhxnz" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.098697 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-rhxnz"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.107824 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-4mmm8"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.109025 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-4mmm8" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.123421 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-qf2xg"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.124490 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qf2xg" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.125135 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-h9dkd" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.130772 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-hkrqg" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.139189 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-pfw9c"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.154007 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-4mmm8"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.158815 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjlpn\" (UniqueName: \"kubernetes.io/projected/3ffc5e8b-7f7a-4585-b43d-07e2589493c9-kube-api-access-mjlpn\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-2g2cz\" (UID: \"3ffc5e8b-7f7a-4585-b43d-07e2589493c9\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-2g2cz" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.166842 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-244l4\" (UniqueName: \"kubernetes.io/projected/4e16e340-e213-492a-9c93-851df7b1bddb-kube-api-access-244l4\") pod \"cinder-operator-controller-manager-8d874c8fc-w6qc6\" (UID: \"4e16e340-e213-492a-9c93-851df7b1bddb\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-w6qc6" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.169291 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-t5n28"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.170381 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5n28" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.176736 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-94czq" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.176937 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.188392 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-qf2xg"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.193969 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-t5n28"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.200439 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgpxk\" (UniqueName: \"kubernetes.io/projected/f2898e34-e423-4576-a765-3919510dcd85-kube-api-access-jgpxk\") pod \"infra-operator-controller-manager-79955696d6-t5n28\" (UID: \"f2898e34-e423-4576-a765-3919510dcd85\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5n28" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.200500 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t2vc\" (UniqueName: \"kubernetes.io/projected/81b8c703-d895-41ce-8ca3-99fd6b6eecb6-kube-api-access-4t2vc\") pod \"horizon-operator-controller-manager-5fb775575f-4mmm8\" (UID: \"81b8c703-d895-41ce-8ca3-99fd6b6eecb6\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-4mmm8" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.200526 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rxpp\" (UniqueName: \"kubernetes.io/projected/d01e417c-a1b0-445d-83eb-f3c21a492138-kube-api-access-5rxpp\") pod \"designate-operator-controller-manager-6d9697b7f4-rhxnz\" (UID: \"d01e417c-a1b0-445d-83eb-f3c21a492138\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rhxnz" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.200553 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wckhl\" (UniqueName: \"kubernetes.io/projected/02decfa9-69fb-46b5-8b30-30954e39d411-kube-api-access-wckhl\") pod \"glance-operator-controller-manager-8886f4c47-pfw9c\" (UID: \"02decfa9-69fb-46b5-8b30-30954e39d411\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-pfw9c" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.200590 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f2898e34-e423-4576-a765-3919510dcd85-cert\") pod \"infra-operator-controller-manager-79955696d6-t5n28\" (UID: \"f2898e34-e423-4576-a765-3919510dcd85\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5n28" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.200610 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59v5v\" (UniqueName: \"kubernetes.io/projected/3c56c53e-a292-4e75-b069-c1d06ceeb6c5-kube-api-access-59v5v\") pod 
\"heat-operator-controller-manager-69d6db494d-qf2xg\" (UID: \"3c56c53e-a292-4e75-b069-c1d06ceeb6c5\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qf2xg" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.200913 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-77z62"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.201928 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-77z62" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.205680 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-sf2sl" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.208634 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-kwr4n"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.209551 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kwr4n" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.213672 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-bp7xc" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.216210 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-zpgq2"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.217262 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-zpgq2" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.220155 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-nk9m2" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.228903 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-kwr4n"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.240794 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rxpp\" (UniqueName: \"kubernetes.io/projected/d01e417c-a1b0-445d-83eb-f3c21a492138-kube-api-access-5rxpp\") pod \"designate-operator-controller-manager-6d9697b7f4-rhxnz\" (UID: \"d01e417c-a1b0-445d-83eb-f3c21a492138\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rhxnz" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.249363 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-77z62"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.270580 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-zpgq2"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.277238 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-2g2cz" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.277743 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-c4j5s"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.278929 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c4j5s" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.283999 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-wxpgb" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.294602 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-c4j5s"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.295818 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-w6qc6" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.303343 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wckhl\" (UniqueName: \"kubernetes.io/projected/02decfa9-69fb-46b5-8b30-30954e39d411-kube-api-access-wckhl\") pod \"glance-operator-controller-manager-8886f4c47-pfw9c\" (UID: \"02decfa9-69fb-46b5-8b30-30954e39d411\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-pfw9c" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.303400 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f2898e34-e423-4576-a765-3919510dcd85-cert\") pod \"infra-operator-controller-manager-79955696d6-t5n28\" (UID: \"f2898e34-e423-4576-a765-3919510dcd85\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5n28" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.303424 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59v5v\" (UniqueName: \"kubernetes.io/projected/3c56c53e-a292-4e75-b069-c1d06ceeb6c5-kube-api-access-59v5v\") pod \"heat-operator-controller-manager-69d6db494d-qf2xg\" (UID: \"3c56c53e-a292-4e75-b069-c1d06ceeb6c5\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qf2xg" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.303452 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw5nj\" (UniqueName: \"kubernetes.io/projected/4c2d29a3-d017-4e76-9a82-02943a6b38bf-kube-api-access-pw5nj\") pod \"mariadb-operator-controller-manager-67bf948998-c4j5s\" (UID: \"4c2d29a3-d017-4e76-9a82-02943a6b38bf\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c4j5s" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.303510 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgpxk\" (UniqueName: \"kubernetes.io/projected/f2898e34-e423-4576-a765-3919510dcd85-kube-api-access-jgpxk\") pod \"infra-operator-controller-manager-79955696d6-t5n28\" (UID: \"f2898e34-e423-4576-a765-3919510dcd85\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5n28" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.303534 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-bdh8t\" (UniqueName: \"kubernetes.io/projected/70336809-8231-4ed9-a912-8b668aaa53bb-kube-api-access-bdh8t\") pod \"manila-operator-controller-manager-7dd968899f-zpgq2\" (UID: \"70336809-8231-4ed9-a912-8b668aaa53bb\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-zpgq2" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.303579 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2w54\" (UniqueName: \"kubernetes.io/projected/10cac00e-0cd8-4d53-a4dd-3f6b5200e7e0-kube-api-access-j2w54\") pod \"ironic-operator-controller-manager-5f4b8bd54d-77z62\" (UID: \"10cac00e-0cd8-4d53-a4dd-3f6b5200e7e0\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-77z62" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.303604 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5skg\" (UniqueName: \"kubernetes.io/projected/67107e9f-cf09-4d35-af26-c77f4d76083a-kube-api-access-h5skg\") pod \"keystone-operator-controller-manager-84f48565d4-kwr4n\" (UID: \"67107e9f-cf09-4d35-af26-c77f4d76083a\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kwr4n" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.303624 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t2vc\" (UniqueName: \"kubernetes.io/projected/81b8c703-d895-41ce-8ca3-99fd6b6eecb6-kube-api-access-4t2vc\") pod \"horizon-operator-controller-manager-5fb775575f-4mmm8\" (UID: \"81b8c703-d895-41ce-8ca3-99fd6b6eecb6\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-4mmm8" Jan 29 17:00:22 crc kubenswrapper[4886]: E0129 17:00:22.304004 4886 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 17:00:22 crc kubenswrapper[4886]: E0129 17:00:22.304041 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2898e34-e423-4576-a765-3919510dcd85-cert podName:f2898e34-e423-4576-a765-3919510dcd85 nodeName:}" failed. No retries permitted until 2026-01-29 17:00:22.804027197 +0000 UTC m=+2305.712746469 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f2898e34-e423-4576-a765-3919510dcd85-cert") pod "infra-operator-controller-manager-79955696d6-t5n28" (UID: "f2898e34-e423-4576-a765-3919510dcd85") : secret "infra-operator-webhook-server-cert" not found Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.318399 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-9zqmc"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.319392 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9zqmc" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.322621 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-mvzxw" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.331239 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgpxk\" (UniqueName: \"kubernetes.io/projected/f2898e34-e423-4576-a765-3919510dcd85-kube-api-access-jgpxk\") pod \"infra-operator-controller-manager-79955696d6-t5n28\" (UID: \"f2898e34-e423-4576-a765-3919510dcd85\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5n28" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.342007 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59v5v\" (UniqueName: \"kubernetes.io/projected/3c56c53e-a292-4e75-b069-c1d06ceeb6c5-kube-api-access-59v5v\") pod \"heat-operator-controller-manager-69d6db494d-qf2xg\" (UID: \"3c56c53e-a292-4e75-b069-c1d06ceeb6c5\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qf2xg" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.347189 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t2vc\" (UniqueName: \"kubernetes.io/projected/81b8c703-d895-41ce-8ca3-99fd6b6eecb6-kube-api-access-4t2vc\") pod \"horizon-operator-controller-manager-5fb775575f-4mmm8\" (UID: \"81b8c703-d895-41ce-8ca3-99fd6b6eecb6\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-4mmm8" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.352666 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-dxcgn"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.354442 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dxcgn" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.354546 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wckhl\" (UniqueName: \"kubernetes.io/projected/02decfa9-69fb-46b5-8b30-30954e39d411-kube-api-access-wckhl\") pod \"glance-operator-controller-manager-8886f4c47-pfw9c\" (UID: \"02decfa9-69fb-46b5-8b30-30954e39d411\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-pfw9c" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.359440 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-gml7r" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.364847 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-9zqmc"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.394121 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-dxcgn"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.403623 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-8gq2g"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.405025 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-8gq2g" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.408127 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw5nj\" (UniqueName: \"kubernetes.io/projected/4c2d29a3-d017-4e76-9a82-02943a6b38bf-kube-api-access-pw5nj\") pod \"mariadb-operator-controller-manager-67bf948998-c4j5s\" (UID: \"4c2d29a3-d017-4e76-9a82-02943a6b38bf\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c4j5s" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.408290 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6xj5\" (UniqueName: \"kubernetes.io/projected/053a2790-370f-44bd-a2c0-603ffb22ed3c-kube-api-access-z6xj5\") pod \"neutron-operator-controller-manager-585dbc889-9zqmc\" (UID: \"053a2790-370f-44bd-a2c0-603ffb22ed3c\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9zqmc" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.408384 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdh8t\" (UniqueName: \"kubernetes.io/projected/70336809-8231-4ed9-a912-8b668aaa53bb-kube-api-access-bdh8t\") pod \"manila-operator-controller-manager-7dd968899f-zpgq2\" (UID: \"70336809-8231-4ed9-a912-8b668aaa53bb\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-zpgq2" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.408421 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvgcm\" (UniqueName: \"kubernetes.io/projected/c3cbde0f-6b5d-47cf-93e6-3d2e12051aba-kube-api-access-tvgcm\") pod \"nova-operator-controller-manager-55bff696bd-dxcgn\" (UID: \"c3cbde0f-6b5d-47cf-93e6-3d2e12051aba\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dxcgn" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.408487 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2w54\" (UniqueName: \"kubernetes.io/projected/10cac00e-0cd8-4d53-a4dd-3f6b5200e7e0-kube-api-access-j2w54\") pod \"ironic-operator-controller-manager-5f4b8bd54d-77z62\" (UID: \"10cac00e-0cd8-4d53-a4dd-3f6b5200e7e0\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-77z62" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.408518 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5skg\" (UniqueName: \"kubernetes.io/projected/67107e9f-cf09-4d35-af26-c77f4d76083a-kube-api-access-h5skg\") pod \"keystone-operator-controller-manager-84f48565d4-kwr4n\" (UID: \"67107e9f-cf09-4d35-af26-c77f4d76083a\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kwr4n" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.410892 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-wv9wk" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.428889 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rhxnz" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.433936 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2w54\" (UniqueName: \"kubernetes.io/projected/10cac00e-0cd8-4d53-a4dd-3f6b5200e7e0-kube-api-access-j2w54\") pod \"ironic-operator-controller-manager-5f4b8bd54d-77z62\" (UID: \"10cac00e-0cd8-4d53-a4dd-3f6b5200e7e0\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-77z62" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.434968 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw5nj\" (UniqueName: \"kubernetes.io/projected/4c2d29a3-d017-4e76-9a82-02943a6b38bf-kube-api-access-pw5nj\") pod \"mariadb-operator-controller-manager-67bf948998-c4j5s\" (UID: \"4c2d29a3-d017-4e76-9a82-02943a6b38bf\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c4j5s" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.435052 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.436301 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.439458 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.439795 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-9xgxp" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.443808 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdh8t\" (UniqueName: \"kubernetes.io/projected/70336809-8231-4ed9-a912-8b668aaa53bb-kube-api-access-bdh8t\") pod \"manila-operator-controller-manager-7dd968899f-zpgq2\" (UID: \"70336809-8231-4ed9-a912-8b668aaa53bb\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-zpgq2" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.454249 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5skg\" (UniqueName: \"kubernetes.io/projected/67107e9f-cf09-4d35-af26-c77f4d76083a-kube-api-access-h5skg\") pod \"keystone-operator-controller-manager-84f48565d4-kwr4n\" (UID: \"67107e9f-cf09-4d35-af26-c77f4d76083a\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kwr4n" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.474905 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-pfw9c" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.487908 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-xnccq"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.489959 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xnccq" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.496812 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-9j7mb" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.503605 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-8gq2g"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.504866 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-4mmm8" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.522674 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2b6285c-ada4-43f6-8716-53b2afa13723-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh\" (UID: \"c2b6285c-ada4-43f6-8716-53b2afa13723\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.522735 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmzzl\" (UniqueName: \"kubernetes.io/projected/7b52b050-b925-4562-8682-693917b7899c-kube-api-access-lmzzl\") pod \"octavia-operator-controller-manager-6687f8d877-8gq2g\" (UID: \"7b52b050-b925-4562-8682-693917b7899c\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-8gq2g" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.522805 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmzgb\" (UniqueName: \"kubernetes.io/projected/c2b6285c-ada4-43f6-8716-53b2afa13723-kube-api-access-nmzgb\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh\" (UID: \"c2b6285c-ada4-43f6-8716-53b2afa13723\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.522854 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6xj5\" (UniqueName: \"kubernetes.io/projected/053a2790-370f-44bd-a2c0-603ffb22ed3c-kube-api-access-z6xj5\") pod \"neutron-operator-controller-manager-585dbc889-9zqmc\" (UID: \"053a2790-370f-44bd-a2c0-603ffb22ed3c\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9zqmc" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.522892 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvgcm\" (UniqueName: \"kubernetes.io/projected/c3cbde0f-6b5d-47cf-93e6-3d2e12051aba-kube-api-access-tvgcm\") pod \"nova-operator-controller-manager-55bff696bd-dxcgn\" (UID: \"c3cbde0f-6b5d-47cf-93e6-3d2e12051aba\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dxcgn" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.538840 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.554138 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6xj5\" (UniqueName: \"kubernetes.io/projected/053a2790-370f-44bd-a2c0-603ffb22ed3c-kube-api-access-z6xj5\") pod 
\"neutron-operator-controller-manager-585dbc889-9zqmc\" (UID: \"053a2790-370f-44bd-a2c0-603ffb22ed3c\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9zqmc" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.560863 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qf2xg" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.575939 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvgcm\" (UniqueName: \"kubernetes.io/projected/c3cbde0f-6b5d-47cf-93e6-3d2e12051aba-kube-api-access-tvgcm\") pod \"nova-operator-controller-manager-55bff696bd-dxcgn\" (UID: \"c3cbde0f-6b5d-47cf-93e6-3d2e12051aba\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dxcgn" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.603048 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-77z62" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.625430 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2b6285c-ada4-43f6-8716-53b2afa13723-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh\" (UID: \"c2b6285c-ada4-43f6-8716-53b2afa13723\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.625477 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmzzl\" (UniqueName: \"kubernetes.io/projected/7b52b050-b925-4562-8682-693917b7899c-kube-api-access-lmzzl\") pod \"octavia-operator-controller-manager-6687f8d877-8gq2g\" (UID: \"7b52b050-b925-4562-8682-693917b7899c\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-8gq2g" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.625528 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hvpx\" (UniqueName: \"kubernetes.io/projected/14d9257b-94ae-4b29-b45a-403e034535d3-kube-api-access-4hvpx\") pod \"ovn-operator-controller-manager-788c46999f-xnccq\" (UID: \"14d9257b-94ae-4b29-b45a-403e034535d3\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xnccq" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.625583 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmzgb\" (UniqueName: \"kubernetes.io/projected/c2b6285c-ada4-43f6-8716-53b2afa13723-kube-api-access-nmzgb\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh\" (UID: \"c2b6285c-ada4-43f6-8716-53b2afa13723\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh" Jan 29 17:00:22 crc kubenswrapper[4886]: E0129 17:00:22.627068 4886 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 17:00:22 crc kubenswrapper[4886]: E0129 17:00:22.627115 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2b6285c-ada4-43f6-8716-53b2afa13723-cert podName:c2b6285c-ada4-43f6-8716-53b2afa13723 nodeName:}" failed. No retries permitted until 2026-01-29 17:00:23.127099598 +0000 UTC m=+2306.035818870 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c2b6285c-ada4-43f6-8716-53b2afa13723-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh" (UID: "c2b6285c-ada4-43f6-8716-53b2afa13723") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.652074 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kwr4n" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.673424 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmzgb\" (UniqueName: \"kubernetes.io/projected/c2b6285c-ada4-43f6-8716-53b2afa13723-kube-api-access-nmzgb\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh\" (UID: \"c2b6285c-ada4-43f6-8716-53b2afa13723\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.673448 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-zpgq2" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.698207 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-xnccq"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.698548 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-xt9wq"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.706453 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c4j5s" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.736244 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hvpx\" (UniqueName: \"kubernetes.io/projected/14d9257b-94ae-4b29-b45a-403e034535d3-kube-api-access-4hvpx\") pod \"ovn-operator-controller-manager-788c46999f-xnccq\" (UID: \"14d9257b-94ae-4b29-b45a-403e034535d3\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xnccq" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.737966 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmzzl\" (UniqueName: \"kubernetes.io/projected/7b52b050-b925-4562-8682-693917b7899c-kube-api-access-lmzzl\") pod \"octavia-operator-controller-manager-6687f8d877-8gq2g\" (UID: \"7b52b050-b925-4562-8682-693917b7899c\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-8gq2g" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.738184 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9zqmc" Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.745697 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-cmfj2"] Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.747195 4886 util.go:30] "No sandbox for pod can be found. 
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.750411 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-dhtns"
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.752507 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-cmfj2"
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.757856 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-xt9wq"]
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.777270 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-jxfvf"
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.790740 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dxcgn"
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.794984 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-cmfj2"]
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.796221 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hvpx\" (UniqueName: \"kubernetes.io/projected/14d9257b-94ae-4b29-b45a-403e034535d3-kube-api-access-4hvpx\") pod \"ovn-operator-controller-manager-788c46999f-xnccq\" (UID: \"14d9257b-94ae-4b29-b45a-403e034535d3\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xnccq"
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.827962 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-8gq2g"
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.839681 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8fvq\" (UniqueName: \"kubernetes.io/projected/53042ed9-d676-4bb4-bf7b-9b3520aafd12-kube-api-access-s8fvq\") pod \"placement-operator-controller-manager-5b964cf4cd-xt9wq\" (UID: \"53042ed9-d676-4bb4-bf7b-9b3520aafd12\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xt9wq"
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.839741 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f2898e34-e423-4576-a765-3919510dcd85-cert\") pod \"infra-operator-controller-manager-79955696d6-t5n28\" (UID: \"f2898e34-e423-4576-a765-3919510dcd85\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5n28"
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.839796 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpd7b\" (UniqueName: \"kubernetes.io/projected/608c459b-5b47-478a-9e3a-d83d935ae7c7-kube-api-access-tpd7b\") pod \"swift-operator-controller-manager-68fc8c869-cmfj2\" (UID: \"608c459b-5b47-478a-9e3a-d83d935ae7c7\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-cmfj2"
Jan 29 17:00:22 crc kubenswrapper[4886]: E0129 17:00:22.840452 4886 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 29 17:00:22 crc kubenswrapper[4886]: E0129 17:00:22.840497 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2898e34-e423-4576-a765-3919510dcd85-cert podName:f2898e34-e423-4576-a765-3919510dcd85 nodeName:}" failed. No retries permitted until 2026-01-29 17:00:23.840482126 +0000 UTC m=+2306.749201398 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f2898e34-e423-4576-a765-3919510dcd85-cert") pod "infra-operator-controller-manager-79955696d6-t5n28" (UID: "f2898e34-e423-4576-a765-3919510dcd85") : secret "infra-operator-webhook-server-cert" not found
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.850403 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-75495fd598-2hpj4"]
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.852180 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-75495fd598-2hpj4"
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.854423 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-dst5g"
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.865947 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-75495fd598-2hpj4"]
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.899009 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-hf95f"]
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.900017 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-hf95f"
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.906363 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-mz9qx"
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.921583 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-hf95f"]
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.924020 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xnccq"
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.942046 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-xnrxl"]
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.942879 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5c9r\" (UniqueName: \"kubernetes.io/projected/cbfeb105-c5ee-408e-aac9-e4128e58f0e3-kube-api-access-p5c9r\") pod \"test-operator-controller-manager-56f8bfcd9f-hf95f\" (UID: \"cbfeb105-c5ee-408e-aac9-e4128e58f0e3\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-hf95f"
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.942912 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcgmv\" (UniqueName: \"kubernetes.io/projected/7db85474-4c59-4db6-ab4a-51092ebd5c62-kube-api-access-wcgmv\") pod \"telemetry-operator-controller-manager-75495fd598-2hpj4\" (UID: \"7db85474-4c59-4db6-ab4a-51092ebd5c62\") " pod="openstack-operators/telemetry-operator-controller-manager-75495fd598-2hpj4"
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.942954 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8fvq\" (UniqueName: \"kubernetes.io/projected/53042ed9-d676-4bb4-bf7b-9b3520aafd12-kube-api-access-s8fvq\") pod \"placement-operator-controller-manager-5b964cf4cd-xt9wq\" (UID: \"53042ed9-d676-4bb4-bf7b-9b3520aafd12\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xt9wq"
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.943021 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpd7b\" (UniqueName: \"kubernetes.io/projected/608c459b-5b47-478a-9e3a-d83d935ae7c7-kube-api-access-tpd7b\") pod \"swift-operator-controller-manager-68fc8c869-cmfj2\" (UID: \"608c459b-5b47-478a-9e3a-d83d935ae7c7\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-cmfj2"
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.943442 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-xnrxl"
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.949931 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-ztslw"
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.951131 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-xnrxl"]
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.976492 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4"]
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.976936 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpd7b\" (UniqueName: \"kubernetes.io/projected/608c459b-5b47-478a-9e3a-d83d935ae7c7-kube-api-access-tpd7b\") pod \"swift-operator-controller-manager-68fc8c869-cmfj2\" (UID: \"608c459b-5b47-478a-9e3a-d83d935ae7c7\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-cmfj2"
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.985194 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4"
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.985814 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4"]
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.989923 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-cmfj2"
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.991581 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-q9gr7"
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.991676 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.996982 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Jan 29 17:00:22 crc kubenswrapper[4886]: I0129 17:00:22.998508 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8fvq\" (UniqueName: \"kubernetes.io/projected/53042ed9-d676-4bb4-bf7b-9b3520aafd12-kube-api-access-s8fvq\") pod \"placement-operator-controller-manager-5b964cf4cd-xt9wq\" (UID: \"53042ed9-d676-4bb4-bf7b-9b3520aafd12\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xt9wq"
Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.000567 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ffdr9"]
Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.001988 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ffdr9"
Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.003892 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-kpzmg"
Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.013419 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ffdr9"]
Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.045007 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k6m8\" (UniqueName: \"kubernetes.io/projected/037bf2ff-dd50-4d62-a525-5304c088cbc0-kube-api-access-5k6m8\") pod \"openstack-operator-controller-manager-546c7b8b6d-hngs4\" (UID: \"037bf2ff-dd50-4d62-a525-5304c088cbc0\") " pod="openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4"
Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.045423 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kll8v\" (UniqueName: \"kubernetes.io/projected/165231a4-c627-484b-9aab-b4ce3feafe7e-kube-api-access-kll8v\") pod \"rabbitmq-cluster-operator-manager-668c99d594-ffdr9\" (UID: \"165231a4-c627-484b-9aab-b4ce3feafe7e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ffdr9"
Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.046553 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-metrics-certs\") pod \"openstack-operator-controller-manager-546c7b8b6d-hngs4\" (UID: \"037bf2ff-dd50-4d62-a525-5304c088cbc0\") " pod="openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4"
Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.046731 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5c9r\" (UniqueName: \"kubernetes.io/projected/cbfeb105-c5ee-408e-aac9-e4128e58f0e3-kube-api-access-p5c9r\") pod \"test-operator-controller-manager-56f8bfcd9f-hf95f\" (UID: \"cbfeb105-c5ee-408e-aac9-e4128e58f0e3\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-hf95f"
Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.046806 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcgmv\" (UniqueName: \"kubernetes.io/projected/7db85474-4c59-4db6-ab4a-51092ebd5c62-kube-api-access-wcgmv\") pod \"telemetry-operator-controller-manager-75495fd598-2hpj4\" (UID: \"7db85474-4c59-4db6-ab4a-51092ebd5c62\") " pod="openstack-operators/telemetry-operator-controller-manager-75495fd598-2hpj4"
Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.046906 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv82w\" (UniqueName: \"kubernetes.io/projected/6a145dac-4d02-493c-9bd8-2f9652fcb1d1-kube-api-access-kv82w\") pod \"watcher-operator-controller-manager-564965969-xnrxl\" (UID: \"6a145dac-4d02-493c-9bd8-2f9652fcb1d1\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-xnrxl"
Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.047003 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-webhook-certs\") pod \"openstack-operator-controller-manager-546c7b8b6d-hngs4\" (UID: \"037bf2ff-dd50-4d62-a525-5304c088cbc0\") " pod="openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4"
\"kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-webhook-certs\") pod \"openstack-operator-controller-manager-546c7b8b6d-hngs4\" (UID: \"037bf2ff-dd50-4d62-a525-5304c088cbc0\") " pod="openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4" Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.087696 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcgmv\" (UniqueName: \"kubernetes.io/projected/7db85474-4c59-4db6-ab4a-51092ebd5c62-kube-api-access-wcgmv\") pod \"telemetry-operator-controller-manager-75495fd598-2hpj4\" (UID: \"7db85474-4c59-4db6-ab4a-51092ebd5c62\") " pod="openstack-operators/telemetry-operator-controller-manager-75495fd598-2hpj4" Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.088289 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5c9r\" (UniqueName: \"kubernetes.io/projected/cbfeb105-c5ee-408e-aac9-e4128e58f0e3-kube-api-access-p5c9r\") pod \"test-operator-controller-manager-56f8bfcd9f-hf95f\" (UID: \"cbfeb105-c5ee-408e-aac9-e4128e58f0e3\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-hf95f" Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.148173 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kll8v\" (UniqueName: \"kubernetes.io/projected/165231a4-c627-484b-9aab-b4ce3feafe7e-kube-api-access-kll8v\") pod \"rabbitmq-cluster-operator-manager-668c99d594-ffdr9\" (UID: \"165231a4-c627-484b-9aab-b4ce3feafe7e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ffdr9" Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.148208 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-metrics-certs\") pod \"openstack-operator-controller-manager-546c7b8b6d-hngs4\" (UID: \"037bf2ff-dd50-4d62-a525-5304c088cbc0\") " pod="openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4" Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.148271 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2b6285c-ada4-43f6-8716-53b2afa13723-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh\" (UID: \"c2b6285c-ada4-43f6-8716-53b2afa13723\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh" Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.148335 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv82w\" (UniqueName: \"kubernetes.io/projected/6a145dac-4d02-493c-9bd8-2f9652fcb1d1-kube-api-access-kv82w\") pod \"watcher-operator-controller-manager-564965969-xnrxl\" (UID: \"6a145dac-4d02-493c-9bd8-2f9652fcb1d1\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-xnrxl" Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.148358 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-webhook-certs\") pod \"openstack-operator-controller-manager-546c7b8b6d-hngs4\" (UID: \"037bf2ff-dd50-4d62-a525-5304c088cbc0\") " pod="openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4" Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.148422 4886 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-5k6m8\" (UniqueName: \"kubernetes.io/projected/037bf2ff-dd50-4d62-a525-5304c088cbc0-kube-api-access-5k6m8\") pod \"openstack-operator-controller-manager-546c7b8b6d-hngs4\" (UID: \"037bf2ff-dd50-4d62-a525-5304c088cbc0\") " pod="openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4" Jan 29 17:00:23 crc kubenswrapper[4886]: E0129 17:00:23.148519 4886 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 17:00:23 crc kubenswrapper[4886]: E0129 17:00:23.148589 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2b6285c-ada4-43f6-8716-53b2afa13723-cert podName:c2b6285c-ada4-43f6-8716-53b2afa13723 nodeName:}" failed. No retries permitted until 2026-01-29 17:00:24.148567056 +0000 UTC m=+2307.057286328 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c2b6285c-ada4-43f6-8716-53b2afa13723-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh" (UID: "c2b6285c-ada4-43f6-8716-53b2afa13723") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 17:00:23 crc kubenswrapper[4886]: E0129 17:00:23.148643 4886 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 17:00:23 crc kubenswrapper[4886]: E0129 17:00:23.148671 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-metrics-certs podName:037bf2ff-dd50-4d62-a525-5304c088cbc0 nodeName:}" failed. No retries permitted until 2026-01-29 17:00:23.648660819 +0000 UTC m=+2306.557380091 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-metrics-certs") pod "openstack-operator-controller-manager-546c7b8b6d-hngs4" (UID: "037bf2ff-dd50-4d62-a525-5304c088cbc0") : secret "metrics-server-cert" not found Jan 29 17:00:23 crc kubenswrapper[4886]: E0129 17:00:23.148964 4886 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 17:00:23 crc kubenswrapper[4886]: E0129 17:00:23.149012 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-webhook-certs podName:037bf2ff-dd50-4d62-a525-5304c088cbc0 nodeName:}" failed. No retries permitted until 2026-01-29 17:00:23.648999168 +0000 UTC m=+2306.557718440 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-webhook-certs") pod "openstack-operator-controller-manager-546c7b8b6d-hngs4" (UID: "037bf2ff-dd50-4d62-a525-5304c088cbc0") : secret "webhook-server-cert" not found Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.168042 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k6m8\" (UniqueName: \"kubernetes.io/projected/037bf2ff-dd50-4d62-a525-5304c088cbc0-kube-api-access-5k6m8\") pod \"openstack-operator-controller-manager-546c7b8b6d-hngs4\" (UID: \"037bf2ff-dd50-4d62-a525-5304c088cbc0\") " pod="openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4" Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.177166 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv82w\" (UniqueName: \"kubernetes.io/projected/6a145dac-4d02-493c-9bd8-2f9652fcb1d1-kube-api-access-kv82w\") pod \"watcher-operator-controller-manager-564965969-xnrxl\" (UID: \"6a145dac-4d02-493c-9bd8-2f9652fcb1d1\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-xnrxl" Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.177281 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kll8v\" (UniqueName: \"kubernetes.io/projected/165231a4-c627-484b-9aab-b4ce3feafe7e-kube-api-access-kll8v\") pod \"rabbitmq-cluster-operator-manager-668c99d594-ffdr9\" (UID: \"165231a4-c627-484b-9aab-b4ce3feafe7e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ffdr9" Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.252047 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xt9wq" Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.273802 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-2g2cz"] Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.275034 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-w6qc6"] Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.306637 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-75495fd598-2hpj4" Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.341877 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-hf95f" Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.376978 4886 util.go:30] "No sandbox for pod can be found. 
Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.387082 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-w6qc6" event={"ID":"4e16e340-e213-492a-9c93-851df7b1bddb","Type":"ContainerStarted","Data":"db35a820b3777a5851e8facf3ad0ecbcc7e64fd54a3aced1d804c9fbd5d7246a"}
Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.388474 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-2g2cz" event={"ID":"3ffc5e8b-7f7a-4585-b43d-07e2589493c9","Type":"ContainerStarted","Data":"aa4cf6ed4345267a3570795019cb8b05fcff0ac8df1c63c18bd9de1b886b8442"}
Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.438297 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ffdr9"
Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.688061 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-metrics-certs\") pod \"openstack-operator-controller-manager-546c7b8b6d-hngs4\" (UID: \"037bf2ff-dd50-4d62-a525-5304c088cbc0\") " pod="openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4"
Jan 29 17:00:23 crc kubenswrapper[4886]: E0129 17:00:23.688419 4886 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 29 17:00:23 crc kubenswrapper[4886]: E0129 17:00:23.688492 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-metrics-certs podName:037bf2ff-dd50-4d62-a525-5304c088cbc0 nodeName:}" failed. No retries permitted until 2026-01-29 17:00:24.688471682 +0000 UTC m=+2307.597190954 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-metrics-certs") pod "openstack-operator-controller-manager-546c7b8b6d-hngs4" (UID: "037bf2ff-dd50-4d62-a525-5304c088cbc0") : secret "metrics-server-cert" not found
Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.688512 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-webhook-certs\") pod \"openstack-operator-controller-manager-546c7b8b6d-hngs4\" (UID: \"037bf2ff-dd50-4d62-a525-5304c088cbc0\") " pod="openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4"
Jan 29 17:00:23 crc kubenswrapper[4886]: E0129 17:00:23.688830 4886 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 29 17:00:23 crc kubenswrapper[4886]: E0129 17:00:23.688863 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-webhook-certs podName:037bf2ff-dd50-4d62-a525-5304c088cbc0 nodeName:}" failed. No retries permitted until 2026-01-29 17:00:24.688855563 +0000 UTC m=+2307.597574835 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-webhook-certs") pod "openstack-operator-controller-manager-546c7b8b6d-hngs4" (UID: "037bf2ff-dd50-4d62-a525-5304c088cbc0") : secret "webhook-server-cert" not found
Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.761262 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-qf2xg"]
Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.881704 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-pfw9c"]
Jan 29 17:00:23 crc kubenswrapper[4886]: W0129 17:00:23.893118 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02decfa9_69fb_46b5_8b30_30954e39d411.slice/crio-cd904d745ca033528a23c4f23f61d4912228fb1ee06650bb508b1e3956947400 WatchSource:0}: Error finding container cd904d745ca033528a23c4f23f61d4912228fb1ee06650bb508b1e3956947400: Status 404 returned error can't find the container with id cd904d745ca033528a23c4f23f61d4912228fb1ee06650bb508b1e3956947400
Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.894114 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f2898e34-e423-4576-a765-3919510dcd85-cert\") pod \"infra-operator-controller-manager-79955696d6-t5n28\" (UID: \"f2898e34-e423-4576-a765-3919510dcd85\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5n28"
Jan 29 17:00:23 crc kubenswrapper[4886]: E0129 17:00:23.894384 4886 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 29 17:00:23 crc kubenswrapper[4886]: E0129 17:00:23.894430 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2898e34-e423-4576-a765-3919510dcd85-cert podName:f2898e34-e423-4576-a765-3919510dcd85 nodeName:}" failed. No retries permitted until 2026-01-29 17:00:25.894415491 +0000 UTC m=+2308.803134763 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f2898e34-e423-4576-a765-3919510dcd85-cert") pod "infra-operator-controller-manager-79955696d6-t5n28" (UID: "f2898e34-e423-4576-a765-3919510dcd85") : secret "infra-operator-webhook-server-cert" not found
Jan 29 17:00:23 crc kubenswrapper[4886]: I0129 17:00:23.913959 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-rhxnz"]
Jan 29 17:00:25 crc kubenswrapper[4886]: I0129 17:00:23.923148 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-4mmm8"]
Jan 29 17:00:25 crc kubenswrapper[4886]: I0129 17:00:24.081744 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-c4j5s"]
Jan 29 17:00:25 crc kubenswrapper[4886]: I0129 17:00:24.089224 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-77z62"]
Jan 29 17:00:25 crc kubenswrapper[4886]: I0129 17:00:24.095303 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-zpgq2"]
Jan 29 17:00:25 crc kubenswrapper[4886]: I0129 17:00:24.101585 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-dxcgn"]
Jan 29 17:00:25 crc kubenswrapper[4886]: I0129 17:00:24.202252 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2b6285c-ada4-43f6-8716-53b2afa13723-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh\" (UID: \"c2b6285c-ada4-43f6-8716-53b2afa13723\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh"
Jan 29 17:00:25 crc kubenswrapper[4886]: E0129 17:00:24.202992 4886 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 29 17:00:25 crc kubenswrapper[4886]: E0129 17:00:24.203050 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2b6285c-ada4-43f6-8716-53b2afa13723-cert podName:c2b6285c-ada4-43f6-8716-53b2afa13723 nodeName:}" failed. No retries permitted until 2026-01-29 17:00:26.203031856 +0000 UTC m=+2309.111751128 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c2b6285c-ada4-43f6-8716-53b2afa13723-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh" (UID: "c2b6285c-ada4-43f6-8716-53b2afa13723") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c2b6285c-ada4-43f6-8716-53b2afa13723-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh" (UID: "c2b6285c-ada4-43f6-8716-53b2afa13723") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 17:00:25 crc kubenswrapper[4886]: I0129 17:00:24.413512 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-pfw9c" event={"ID":"02decfa9-69fb-46b5-8b30-30954e39d411","Type":"ContainerStarted","Data":"cd904d745ca033528a23c4f23f61d4912228fb1ee06650bb508b1e3956947400"} Jan 29 17:00:25 crc kubenswrapper[4886]: I0129 17:00:24.418715 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-4mmm8" event={"ID":"81b8c703-d895-41ce-8ca3-99fd6b6eecb6","Type":"ContainerStarted","Data":"ce76bb90ce03e73f284ea82f03e266ccc7338861c7bba9795e175aac0b53dd31"} Jan 29 17:00:25 crc kubenswrapper[4886]: I0129 17:00:24.421394 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c4j5s" event={"ID":"4c2d29a3-d017-4e76-9a82-02943a6b38bf","Type":"ContainerStarted","Data":"639db0c86c876ac827c93387845e1cf206d6ed3fed2f43d1aa8357fade4d598f"} Jan 29 17:00:25 crc kubenswrapper[4886]: I0129 17:00:24.430755 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-zpgq2" event={"ID":"70336809-8231-4ed9-a912-8b668aaa53bb","Type":"ContainerStarted","Data":"f7d0ef1be0b7b9e5f87a6132728c16798b8e3959eb5b9c22272745a6c4006e53"} Jan 29 17:00:25 crc kubenswrapper[4886]: I0129 17:00:24.440124 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qf2xg" event={"ID":"3c56c53e-a292-4e75-b069-c1d06ceeb6c5","Type":"ContainerStarted","Data":"50d9d3ab14eb99b279b44d5ab3871b022f40d978e61529529c73987e5e7fdba4"} Jan 29 17:00:25 crc kubenswrapper[4886]: I0129 17:00:24.445272 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dxcgn" event={"ID":"c3cbde0f-6b5d-47cf-93e6-3d2e12051aba","Type":"ContainerStarted","Data":"695cc7e2d3be4658991cc89e25b5ee6e17aa1ad185177021e3410bd48a560eb1"} Jan 29 17:00:25 crc kubenswrapper[4886]: I0129 17:00:24.446934 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rhxnz" event={"ID":"d01e417c-a1b0-445d-83eb-f3c21a492138","Type":"ContainerStarted","Data":"548311bf15facc7ee9df41358726597c099c65c3d7f5e56b972cdfbe9d03afb4"} Jan 29 17:00:25 crc kubenswrapper[4886]: I0129 17:00:24.447965 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-77z62" event={"ID":"10cac00e-0cd8-4d53-a4dd-3f6b5200e7e0","Type":"ContainerStarted","Data":"65e1ef351905936df764e2e04cb24981be76e4325012871394a549d5a5d20b54"} Jan 29 17:00:25 crc kubenswrapper[4886]: I0129 17:00:24.713730 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-metrics-certs\") pod \"openstack-operator-controller-manager-546c7b8b6d-hngs4\" (UID: \"037bf2ff-dd50-4d62-a525-5304c088cbc0\") " pod="openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4" Jan 29 
17:00:25 crc kubenswrapper[4886]: E0129 17:00:24.713929 4886 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 17:00:25 crc kubenswrapper[4886]: E0129 17:00:24.714008 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-metrics-certs podName:037bf2ff-dd50-4d62-a525-5304c088cbc0 nodeName:}" failed. No retries permitted until 2026-01-29 17:00:26.713989198 +0000 UTC m=+2309.622708470 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-metrics-certs") pod "openstack-operator-controller-manager-546c7b8b6d-hngs4" (UID: "037bf2ff-dd50-4d62-a525-5304c088cbc0") : secret "metrics-server-cert" not found Jan 29 17:00:25 crc kubenswrapper[4886]: I0129 17:00:24.714422 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-webhook-certs\") pod \"openstack-operator-controller-manager-546c7b8b6d-hngs4\" (UID: \"037bf2ff-dd50-4d62-a525-5304c088cbc0\") " pod="openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4" Jan 29 17:00:25 crc kubenswrapper[4886]: E0129 17:00:24.714549 4886 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 17:00:25 crc kubenswrapper[4886]: E0129 17:00:24.714607 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-webhook-certs podName:037bf2ff-dd50-4d62-a525-5304c088cbc0 nodeName:}" failed. No retries permitted until 2026-01-29 17:00:26.714583705 +0000 UTC m=+2309.623303057 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-webhook-certs") pod "openstack-operator-controller-manager-546c7b8b6d-hngs4" (UID: "037bf2ff-dd50-4d62-a525-5304c088cbc0") : secret "webhook-server-cert" not found Jan 29 17:00:25 crc kubenswrapper[4886]: I0129 17:00:25.727952 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-hf95f"] Jan 29 17:00:25 crc kubenswrapper[4886]: I0129 17:00:25.735567 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-8gq2g"] Jan 29 17:00:25 crc kubenswrapper[4886]: I0129 17:00:25.749972 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-cmfj2"] Jan 29 17:00:25 crc kubenswrapper[4886]: I0129 17:00:25.768446 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-xnccq"] Jan 29 17:00:25 crc kubenswrapper[4886]: I0129 17:00:25.784458 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-xt9wq"] Jan 29 17:00:25 crc kubenswrapper[4886]: W0129 17:00:25.795195 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b52b050_b925_4562_8682_693917b7899c.slice/crio-7db470ef18fc217f81e53aed0aa7446ba74a4fc7176fb2d9b9bcd53bbc32d938 WatchSource:0}: Error finding container 7db470ef18fc217f81e53aed0aa7446ba74a4fc7176fb2d9b9bcd53bbc32d938: Status 404 returned error can't find the container with id 7db470ef18fc217f81e53aed0aa7446ba74a4fc7176fb2d9b9bcd53bbc32d938 Jan 29 17:00:25 crc kubenswrapper[4886]: W0129 17:00:25.796225 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod608c459b_5b47_478a_9e3a_d83d935ae7c7.slice/crio-37cbc13cfbafce0376297b208d259de8971217c0e71b19bb26439a7bfd3d08a9 WatchSource:0}: Error finding container 37cbc13cfbafce0376297b208d259de8971217c0e71b19bb26439a7bfd3d08a9: Status 404 returned error can't find the container with id 37cbc13cfbafce0376297b208d259de8971217c0e71b19bb26439a7bfd3d08a9 Jan 29 17:00:25 crc kubenswrapper[4886]: I0129 17:00:25.800237 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-kwr4n"] Jan 29 17:00:25 crc kubenswrapper[4886]: W0129 17:00:25.830723 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67107e9f_cf09_4d35_af26_c77f4d76083a.slice/crio-3ef2f289f5e872f27a84dd96f4882804758947fa5161cd292896e231f3b64b0f WatchSource:0}: Error finding container 3ef2f289f5e872f27a84dd96f4882804758947fa5161cd292896e231f3b64b0f: Status 404 returned error can't find the container with id 3ef2f289f5e872f27a84dd96f4882804758947fa5161cd292896e231f3b64b0f Jan 29 17:00:25 crc kubenswrapper[4886]: I0129 17:00:25.857703 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-9zqmc"] Jan 29 17:00:25 crc kubenswrapper[4886]: I0129 17:00:25.953077 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f2898e34-e423-4576-a765-3919510dcd85-cert\") pod 
\"infra-operator-controller-manager-79955696d6-t5n28\" (UID: \"f2898e34-e423-4576-a765-3919510dcd85\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5n28" Jan 29 17:00:25 crc kubenswrapper[4886]: E0129 17:00:25.953315 4886 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 17:00:25 crc kubenswrapper[4886]: E0129 17:00:25.953386 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2898e34-e423-4576-a765-3919510dcd85-cert podName:f2898e34-e423-4576-a765-3919510dcd85 nodeName:}" failed. No retries permitted until 2026-01-29 17:00:29.953371245 +0000 UTC m=+2312.862090507 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f2898e34-e423-4576-a765-3919510dcd85-cert") pod "infra-operator-controller-manager-79955696d6-t5n28" (UID: "f2898e34-e423-4576-a765-3919510dcd85") : secret "infra-operator-webhook-server-cert" not found Jan 29 17:00:25 crc kubenswrapper[4886]: I0129 17:00:25.992615 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-75495fd598-2hpj4"] Jan 29 17:00:26 crc kubenswrapper[4886]: W0129 17:00:26.003344 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7db85474_4c59_4db6_ab4a_51092ebd5c62.slice/crio-a410da94c921ce1ac560d29e5bb238702fb864ac3487b73f8e87335e2267b61f WatchSource:0}: Error finding container a410da94c921ce1ac560d29e5bb238702fb864ac3487b73f8e87335e2267b61f: Status 404 returned error can't find the container with id a410da94c921ce1ac560d29e5bb238702fb864ac3487b73f8e87335e2267b61f Jan 29 17:00:26 crc kubenswrapper[4886]: I0129 17:00:26.024269 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-xnrxl"] Jan 29 17:00:26 crc kubenswrapper[4886]: I0129 17:00:26.044984 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ffdr9"] Jan 29 17:00:26 crc kubenswrapper[4886]: W0129 17:00:26.054586 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a145dac_4d02_493c_9bd8_2f9652fcb1d1.slice/crio-8d42ceeb5d9bf64a7bed2661af6e701d19abe001843d00ab378a51f2b9af96b1 WatchSource:0}: Error finding container 8d42ceeb5d9bf64a7bed2661af6e701d19abe001843d00ab378a51f2b9af96b1: Status 404 returned error can't find the container with id 8d42ceeb5d9bf64a7bed2661af6e701d19abe001843d00ab378a51f2b9af96b1 Jan 29 17:00:26 crc kubenswrapper[4886]: I0129 17:00:26.260421 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2b6285c-ada4-43f6-8716-53b2afa13723-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh\" (UID: \"c2b6285c-ada4-43f6-8716-53b2afa13723\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh" Jan 29 17:00:26 crc kubenswrapper[4886]: E0129 17:00:26.260649 4886 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 17:00:26 crc kubenswrapper[4886]: E0129 17:00:26.260700 4886 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/c2b6285c-ada4-43f6-8716-53b2afa13723-cert podName:c2b6285c-ada4-43f6-8716-53b2afa13723 nodeName:}" failed. No retries permitted until 2026-01-29 17:00:30.260686463 +0000 UTC m=+2313.169405735 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c2b6285c-ada4-43f6-8716-53b2afa13723-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh" (UID: "c2b6285c-ada4-43f6-8716-53b2afa13723") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 17:00:26 crc kubenswrapper[4886]: I0129 17:00:26.473424 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xnccq" event={"ID":"14d9257b-94ae-4b29-b45a-403e034535d3","Type":"ContainerStarted","Data":"cf6b440152efb9317aca275b6d58dd2b7b288c79058354a01453a7dd476218ea"} Jan 29 17:00:26 crc kubenswrapper[4886]: I0129 17:00:26.475978 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xt9wq" event={"ID":"53042ed9-d676-4bb4-bf7b-9b3520aafd12","Type":"ContainerStarted","Data":"aa01ab8a81f918d3213c672f3a8af891e78314708981db6eb9e6c82dc62026ba"} Jan 29 17:00:26 crc kubenswrapper[4886]: I0129 17:00:26.484501 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ffdr9" event={"ID":"165231a4-c627-484b-9aab-b4ce3feafe7e","Type":"ContainerStarted","Data":"84c9a06b3d91b965b076c1dc5be61e2fa359472b876e80f6a30ddbd9fbf15160"} Jan 29 17:00:26 crc kubenswrapper[4886]: I0129 17:00:26.486302 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9zqmc" event={"ID":"053a2790-370f-44bd-a2c0-603ffb22ed3c","Type":"ContainerStarted","Data":"01edb524f0eecfe97b6696ff1f08b05f06a7d381aeae5df2ddf1a0620edc11c1"} Jan 29 17:00:26 crc kubenswrapper[4886]: I0129 17:00:26.489155 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-8gq2g" event={"ID":"7b52b050-b925-4562-8682-693917b7899c","Type":"ContainerStarted","Data":"7db470ef18fc217f81e53aed0aa7446ba74a4fc7176fb2d9b9bcd53bbc32d938"} Jan 29 17:00:26 crc kubenswrapper[4886]: I0129 17:00:26.491257 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-hf95f" event={"ID":"cbfeb105-c5ee-408e-aac9-e4128e58f0e3","Type":"ContainerStarted","Data":"6992181f56c9dc20f7f0af22476858a99d0fe8af4d0c19429a6eaad302e469cc"} Jan 29 17:00:26 crc kubenswrapper[4886]: I0129 17:00:26.492593 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-xnrxl" event={"ID":"6a145dac-4d02-493c-9bd8-2f9652fcb1d1","Type":"ContainerStarted","Data":"8d42ceeb5d9bf64a7bed2661af6e701d19abe001843d00ab378a51f2b9af96b1"} Jan 29 17:00:26 crc kubenswrapper[4886]: I0129 17:00:26.494346 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-75495fd598-2hpj4" event={"ID":"7db85474-4c59-4db6-ab4a-51092ebd5c62","Type":"ContainerStarted","Data":"a410da94c921ce1ac560d29e5bb238702fb864ac3487b73f8e87335e2267b61f"} Jan 29 17:00:26 crc kubenswrapper[4886]: I0129 17:00:26.495683 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kwr4n" 
event={"ID":"67107e9f-cf09-4d35-af26-c77f4d76083a","Type":"ContainerStarted","Data":"3ef2f289f5e872f27a84dd96f4882804758947fa5161cd292896e231f3b64b0f"} Jan 29 17:00:26 crc kubenswrapper[4886]: I0129 17:00:26.496717 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-cmfj2" event={"ID":"608c459b-5b47-478a-9e3a-d83d935ae7c7","Type":"ContainerStarted","Data":"37cbc13cfbafce0376297b208d259de8971217c0e71b19bb26439a7bfd3d08a9"} Jan 29 17:00:26 crc kubenswrapper[4886]: I0129 17:00:26.769440 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-webhook-certs\") pod \"openstack-operator-controller-manager-546c7b8b6d-hngs4\" (UID: \"037bf2ff-dd50-4d62-a525-5304c088cbc0\") " pod="openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4" Jan 29 17:00:26 crc kubenswrapper[4886]: I0129 17:00:26.769629 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-metrics-certs\") pod \"openstack-operator-controller-manager-546c7b8b6d-hngs4\" (UID: \"037bf2ff-dd50-4d62-a525-5304c088cbc0\") " pod="openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4" Jan 29 17:00:26 crc kubenswrapper[4886]: E0129 17:00:26.769636 4886 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 17:00:26 crc kubenswrapper[4886]: E0129 17:00:26.769735 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-webhook-certs podName:037bf2ff-dd50-4d62-a525-5304c088cbc0 nodeName:}" failed. No retries permitted until 2026-01-29 17:00:30.769696531 +0000 UTC m=+2313.678415803 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-webhook-certs") pod "openstack-operator-controller-manager-546c7b8b6d-hngs4" (UID: "037bf2ff-dd50-4d62-a525-5304c088cbc0") : secret "webhook-server-cert" not found Jan 29 17:00:26 crc kubenswrapper[4886]: E0129 17:00:26.769846 4886 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 17:00:26 crc kubenswrapper[4886]: E0129 17:00:26.770302 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-metrics-certs podName:037bf2ff-dd50-4d62-a525-5304c088cbc0 nodeName:}" failed. No retries permitted until 2026-01-29 17:00:30.770293118 +0000 UTC m=+2313.679012390 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-metrics-certs") pod "openstack-operator-controller-manager-546c7b8b6d-hngs4" (UID: "037bf2ff-dd50-4d62-a525-5304c088cbc0") : secret "metrics-server-cert" not found Jan 29 17:00:29 crc kubenswrapper[4886]: I0129 17:00:29.660408 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:00:29 crc kubenswrapper[4886]: I0129 17:00:29.660764 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:00:29 crc kubenswrapper[4886]: I0129 17:00:29.660833 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" Jan 29 17:00:29 crc kubenswrapper[4886]: I0129 17:00:29.661609 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1ef597c576c05004c5148470ade7ddd51ab3cad8d942f918ff09afb054559dfc"} pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 17:00:29 crc kubenswrapper[4886]: I0129 17:00:29.661662 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" containerID="cri-o://1ef597c576c05004c5148470ade7ddd51ab3cad8d942f918ff09afb054559dfc" gracePeriod=600 Jan 29 17:00:30 crc kubenswrapper[4886]: I0129 17:00:30.030642 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f2898e34-e423-4576-a765-3919510dcd85-cert\") pod \"infra-operator-controller-manager-79955696d6-t5n28\" (UID: \"f2898e34-e423-4576-a765-3919510dcd85\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5n28" Jan 29 17:00:30 crc kubenswrapper[4886]: E0129 17:00:30.030789 4886 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 17:00:30 crc kubenswrapper[4886]: E0129 17:00:30.030919 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2898e34-e423-4576-a765-3919510dcd85-cert podName:f2898e34-e423-4576-a765-3919510dcd85 nodeName:}" failed. No retries permitted until 2026-01-29 17:00:38.030904979 +0000 UTC m=+2320.939624251 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f2898e34-e423-4576-a765-3919510dcd85-cert") pod "infra-operator-controller-manager-79955696d6-t5n28" (UID: "f2898e34-e423-4576-a765-3919510dcd85") : secret "infra-operator-webhook-server-cert" not found Jan 29 17:00:30 crc kubenswrapper[4886]: I0129 17:00:30.335369 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2b6285c-ada4-43f6-8716-53b2afa13723-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh\" (UID: \"c2b6285c-ada4-43f6-8716-53b2afa13723\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh" Jan 29 17:00:30 crc kubenswrapper[4886]: E0129 17:00:30.335557 4886 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 17:00:30 crc kubenswrapper[4886]: E0129 17:00:30.335667 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2b6285c-ada4-43f6-8716-53b2afa13723-cert podName:c2b6285c-ada4-43f6-8716-53b2afa13723 nodeName:}" failed. No retries permitted until 2026-01-29 17:00:38.335640775 +0000 UTC m=+2321.244360097 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c2b6285c-ada4-43f6-8716-53b2afa13723-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh" (UID: "c2b6285c-ada4-43f6-8716-53b2afa13723") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 17:00:30 crc kubenswrapper[4886]: I0129 17:00:30.846042 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-metrics-certs\") pod \"openstack-operator-controller-manager-546c7b8b6d-hngs4\" (UID: \"037bf2ff-dd50-4d62-a525-5304c088cbc0\") " pod="openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4" Jan 29 17:00:30 crc kubenswrapper[4886]: I0129 17:00:30.846232 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-webhook-certs\") pod \"openstack-operator-controller-manager-546c7b8b6d-hngs4\" (UID: \"037bf2ff-dd50-4d62-a525-5304c088cbc0\") " pod="openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4" Jan 29 17:00:30 crc kubenswrapper[4886]: E0129 17:00:30.846400 4886 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 17:00:30 crc kubenswrapper[4886]: E0129 17:00:30.846421 4886 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 17:00:30 crc kubenswrapper[4886]: E0129 17:00:30.846583 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-metrics-certs podName:037bf2ff-dd50-4d62-a525-5304c088cbc0 nodeName:}" failed. No retries permitted until 2026-01-29 17:00:38.846540566 +0000 UTC m=+2321.755259988 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-metrics-certs") pod "openstack-operator-controller-manager-546c7b8b6d-hngs4" (UID: "037bf2ff-dd50-4d62-a525-5304c088cbc0") : secret "metrics-server-cert" not found Jan 29 17:00:30 crc kubenswrapper[4886]: E0129 17:00:30.846684 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-webhook-certs podName:037bf2ff-dd50-4d62-a525-5304c088cbc0 nodeName:}" failed. No retries permitted until 2026-01-29 17:00:38.846661029 +0000 UTC m=+2321.755380301 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-webhook-certs") pod "openstack-operator-controller-manager-546c7b8b6d-hngs4" (UID: "037bf2ff-dd50-4d62-a525-5304c088cbc0") : secret "webhook-server-cert" not found Jan 29 17:00:32 crc kubenswrapper[4886]: I0129 17:00:32.558121 4886 generic.go:334] "Generic (PLEG): container finished" podID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerID="1ef597c576c05004c5148470ade7ddd51ab3cad8d942f918ff09afb054559dfc" exitCode=0 Jan 29 17:00:32 crc kubenswrapper[4886]: I0129 17:00:32.558165 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerDied","Data":"1ef597c576c05004c5148470ade7ddd51ab3cad8d942f918ff09afb054559dfc"} Jan 29 17:00:32 crc kubenswrapper[4886]: I0129 17:00:32.559512 4886 scope.go:117] "RemoveContainer" containerID="8ef97582eea2927ab131d16b422621b32afa666846864a223a782bc24fb0ddda" Jan 29 17:00:36 crc kubenswrapper[4886]: E0129 17:00:36.032249 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage611160604/1\": happened during read: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382" Jan 29 17:00:36 crc kubenswrapper[4886]: E0129 17:00:36.032969 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5rxpp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d9697b7f4-rhxnz_openstack-operators(d01e417c-a1b0-445d-83eb-f3c21a492138): ErrImagePull: rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage611160604/1\": happened during read: context canceled" logger="UnhandledError" Jan 29 17:00:36 crc kubenswrapper[4886]: E0129 17:00:36.034376 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = writing blob: storing blob to file \\\"/var/tmp/container_images_storage611160604/1\\\": happened during read: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rhxnz" podUID="d01e417c-a1b0-445d-83eb-f3c21a492138" Jan 29 17:00:36 crc kubenswrapper[4886]: E0129 17:00:36.601952 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rhxnz" podUID="d01e417c-a1b0-445d-83eb-f3c21a492138" Jan 29 17:00:38 crc kubenswrapper[4886]: I0129 17:00:38.071137 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f2898e34-e423-4576-a765-3919510dcd85-cert\") pod \"infra-operator-controller-manager-79955696d6-t5n28\" (UID: \"f2898e34-e423-4576-a765-3919510dcd85\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5n28" Jan 29 17:00:38 crc kubenswrapper[4886]: E0129 17:00:38.071338 4886 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 17:00:38 crc kubenswrapper[4886]: E0129 17:00:38.071651 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2898e34-e423-4576-a765-3919510dcd85-cert podName:f2898e34-e423-4576-a765-3919510dcd85 nodeName:}" failed. No retries permitted until 2026-01-29 17:00:54.071631731 +0000 UTC m=+2336.980351013 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f2898e34-e423-4576-a765-3919510dcd85-cert") pod "infra-operator-controller-manager-79955696d6-t5n28" (UID: "f2898e34-e423-4576-a765-3919510dcd85") : secret "infra-operator-webhook-server-cert" not found Jan 29 17:00:38 crc kubenswrapper[4886]: I0129 17:00:38.378147 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2b6285c-ada4-43f6-8716-53b2afa13723-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh\" (UID: \"c2b6285c-ada4-43f6-8716-53b2afa13723\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh" Jan 29 17:00:38 crc kubenswrapper[4886]: E0129 17:00:38.378702 4886 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 17:00:38 crc kubenswrapper[4886]: E0129 17:00:38.378879 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2b6285c-ada4-43f6-8716-53b2afa13723-cert podName:c2b6285c-ada4-43f6-8716-53b2afa13723 nodeName:}" failed. No retries permitted until 2026-01-29 17:00:54.378855857 +0000 UTC m=+2337.287575139 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c2b6285c-ada4-43f6-8716-53b2afa13723-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh" (UID: "c2b6285c-ada4-43f6-8716-53b2afa13723") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 17:00:38 crc kubenswrapper[4886]: I0129 17:00:38.887334 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-metrics-certs\") pod \"openstack-operator-controller-manager-546c7b8b6d-hngs4\" (UID: \"037bf2ff-dd50-4d62-a525-5304c088cbc0\") " pod="openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4" Jan 29 17:00:38 crc kubenswrapper[4886]: I0129 17:00:38.887494 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-webhook-certs\") pod \"openstack-operator-controller-manager-546c7b8b6d-hngs4\" (UID: \"037bf2ff-dd50-4d62-a525-5304c088cbc0\") " pod="openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4" Jan 29 17:00:38 crc kubenswrapper[4886]: E0129 17:00:38.887588 4886 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 17:00:38 crc kubenswrapper[4886]: E0129 17:00:38.887665 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-metrics-certs podName:037bf2ff-dd50-4d62-a525-5304c088cbc0 nodeName:}" failed. No retries permitted until 2026-01-29 17:00:54.887646639 +0000 UTC m=+2337.796365911 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-metrics-certs") pod "openstack-operator-controller-manager-546c7b8b6d-hngs4" (UID: "037bf2ff-dd50-4d62-a525-5304c088cbc0") : secret "metrics-server-cert" not found Jan 29 17:00:38 crc kubenswrapper[4886]: E0129 17:00:38.887682 4886 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 17:00:38 crc kubenswrapper[4886]: E0129 17:00:38.887731 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-webhook-certs podName:037bf2ff-dd50-4d62-a525-5304c088cbc0 nodeName:}" failed. No retries permitted until 2026-01-29 17:00:54.88771622 +0000 UTC m=+2337.796435492 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-webhook-certs") pod "openstack-operator-controller-manager-546c7b8b6d-hngs4" (UID: "037bf2ff-dd50-4d62-a525-5304c088cbc0") : secret "webhook-server-cert" not found Jan 29 17:00:40 crc kubenswrapper[4886]: E0129 17:00:40.023495 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898" Jan 29 17:00:40 crc kubenswrapper[4886]: E0129 17:00:40.023682 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-244l4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-8d874c8fc-w6qc6_openstack-operators(4e16e340-e213-492a-9c93-851df7b1bddb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 17:00:40 crc kubenswrapper[4886]: E0129 17:00:40.024863 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-w6qc6" podUID="4e16e340-e213-492a-9c93-851df7b1bddb" Jan 29 17:00:40 crc kubenswrapper[4886]: E0129 17:00:40.457333 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:379470e2752f286e73908e94233e884922b231169a5521a59f53843a2dc3184c" Jan 29 17:00:40 crc kubenswrapper[4886]: E0129 17:00:40.457525 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:379470e2752f286e73908e94233e884922b231169a5521a59f53843a2dc3184c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mjlpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7b6c4d8c5f-2g2cz_openstack-operators(3ffc5e8b-7f7a-4585-b43d-07e2589493c9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 17:00:40 crc kubenswrapper[4886]: E0129 17:00:40.459387 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-2g2cz" podUID="3ffc5e8b-7f7a-4585-b43d-07e2589493c9" Jan 29 17:00:40 crc kubenswrapper[4886]: E0129 17:00:40.638527 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:379470e2752f286e73908e94233e884922b231169a5521a59f53843a2dc3184c\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-2g2cz" podUID="3ffc5e8b-7f7a-4585-b43d-07e2589493c9" Jan 29 17:00:40 crc kubenswrapper[4886]: E0129 17:00:40.638750 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-w6qc6" podUID="4e16e340-e213-492a-9c93-851df7b1bddb" Jan 29 17:00:46 crc kubenswrapper[4886]: E0129 17:00:46.415508 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd: Get \"https://quay.io/v2/openstack-k8s-operators/rabbitmq-cluster-operator/blobs/sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd\": context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 29 17:00:46 crc kubenswrapper[4886]: E0129 17:00:46.416151 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kll8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-ffdr9_openstack-operators(165231a4-c627-484b-9aab-b4ce3feafe7e): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd: Get \"https://quay.io/v2/openstack-k8s-operators/rabbitmq-cluster-operator/blobs/sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd\": context canceled" logger="UnhandledError" Jan 29 17:00:46 crc kubenswrapper[4886]: E0129 17:00:46.417773 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd: Get \\\"https://quay.io/v2/openstack-k8s-operators/rabbitmq-cluster-operator/blobs/sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd\\\": context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ffdr9" podUID="165231a4-c627-484b-9aab-b4ce3feafe7e" Jan 29 17:00:46 crc kubenswrapper[4886]: E0129 17:00:46.727884 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ffdr9" podUID="165231a4-c627-484b-9aab-b4ce3feafe7e" Jan 29 17:00:47 crc kubenswrapper[4886]: E0129 17:00:47.291494 4886 log.go:32] "PullImage from image service failed" err="rpc error: code 
= Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566" Jan 29 17:00:47 crc kubenswrapper[4886]: E0129 17:00:47.291681 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bdh8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7dd968899f-zpgq2_openstack-operators(70336809-8231-4ed9-a912-8b668aaa53bb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 17:00:47 crc kubenswrapper[4886]: E0129 17:00:47.293226 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-zpgq2" podUID="70336809-8231-4ed9-a912-8b668aaa53bb" Jan 29 17:00:47 crc kubenswrapper[4886]: E0129 17:00:47.749896 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566\\\"\"" 
pod="openstack-operators/manila-operator-controller-manager-7dd968899f-zpgq2" podUID="70336809-8231-4ed9-a912-8b668aaa53bb" Jan 29 17:00:47 crc kubenswrapper[4886]: E0129 17:00:47.848410 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4" Jan 29 17:00:47 crc kubenswrapper[4886]: E0129 17:00:47.848600 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4hvpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-xnccq_openstack-operators(14d9257b-94ae-4b29-b45a-403e034535d3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 17:00:47 crc kubenswrapper[4886]: E0129 17:00:47.849785 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xnccq" podUID="14d9257b-94ae-4b29-b45a-403e034535d3" Jan 29 17:00:48 crc kubenswrapper[4886]: E0129 17:00:48.553723 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:00:48 crc kubenswrapper[4886]: I0129 17:00:48.753970 4886 scope.go:117] "RemoveContainer" containerID="1ef597c576c05004c5148470ade7ddd51ab3cad8d942f918ff09afb054559dfc" Jan 29 17:00:48 crc kubenswrapper[4886]: E0129 17:00:48.754225 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:00:48 crc kubenswrapper[4886]: E0129 17:00:48.862077 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xnccq" podUID="14d9257b-94ae-4b29-b45a-403e034535d3" Jan 29 17:00:54 crc kubenswrapper[4886]: I0129 17:00:54.074969 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f2898e34-e423-4576-a765-3919510dcd85-cert\") pod \"infra-operator-controller-manager-79955696d6-t5n28\" (UID: \"f2898e34-e423-4576-a765-3919510dcd85\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5n28" Jan 29 17:00:54 crc kubenswrapper[4886]: I0129 17:00:54.081923 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f2898e34-e423-4576-a765-3919510dcd85-cert\") pod \"infra-operator-controller-manager-79955696d6-t5n28\" (UID: \"f2898e34-e423-4576-a765-3919510dcd85\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5n28" Jan 29 17:00:54 crc kubenswrapper[4886]: I0129 17:00:54.362820 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5n28" Jan 29 17:00:54 crc kubenswrapper[4886]: I0129 17:00:54.381911 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2b6285c-ada4-43f6-8716-53b2afa13723-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh\" (UID: \"c2b6285c-ada4-43f6-8716-53b2afa13723\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh" Jan 29 17:00:54 crc kubenswrapper[4886]: I0129 17:00:54.385778 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2b6285c-ada4-43f6-8716-53b2afa13723-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh\" (UID: \"c2b6285c-ada4-43f6-8716-53b2afa13723\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh" Jan 29 17:00:54 crc kubenswrapper[4886]: I0129 17:00:54.406276 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh" Jan 29 17:00:54 crc kubenswrapper[4886]: I0129 17:00:54.895368 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-metrics-certs\") pod \"openstack-operator-controller-manager-546c7b8b6d-hngs4\" (UID: \"037bf2ff-dd50-4d62-a525-5304c088cbc0\") " pod="openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4" Jan 29 17:00:54 crc kubenswrapper[4886]: I0129 17:00:54.895784 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-webhook-certs\") pod \"openstack-operator-controller-manager-546c7b8b6d-hngs4\" (UID: \"037bf2ff-dd50-4d62-a525-5304c088cbc0\") " pod="openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4" Jan 29 17:00:54 crc kubenswrapper[4886]: I0129 17:00:54.902356 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-webhook-certs\") pod \"openstack-operator-controller-manager-546c7b8b6d-hngs4\" (UID: \"037bf2ff-dd50-4d62-a525-5304c088cbc0\") " pod="openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4" Jan 29 17:00:54 crc kubenswrapper[4886]: I0129 17:00:54.903543 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/037bf2ff-dd50-4d62-a525-5304c088cbc0-metrics-certs\") pod \"openstack-operator-controller-manager-546c7b8b6d-hngs4\" (UID: \"037bf2ff-dd50-4d62-a525-5304c088cbc0\") " pod="openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4" Jan 29 17:00:54 crc kubenswrapper[4886]: I0129 17:00:54.917360 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4" Jan 29 17:00:57 crc kubenswrapper[4886]: E0129 17:00:57.071493 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10" Jan 29 17:00:57 crc kubenswrapper[4886]: E0129 17:00:57.072060 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-59v5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69d6db494d-qf2xg_openstack-operators(3c56c53e-a292-4e75-b069-c1d06ceeb6c5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 17:00:57 crc kubenswrapper[4886]: E0129 17:00:57.073214 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qf2xg" podUID="3c56c53e-a292-4e75-b069-c1d06ceeb6c5" Jan 29 17:00:57 crc kubenswrapper[4886]: E0129 17:00:57.841180 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qf2xg" podUID="3c56c53e-a292-4e75-b069-c1d06ceeb6c5" Jan 29 17:01:00 crc kubenswrapper[4886]: I0129 17:01:00.032273 4886 scope.go:117] "RemoveContainer" containerID="e24030b3765055e623ca669573f5fe2306c10abdab283e014f331f200998a684" Jan 29 17:01:00 crc kubenswrapper[4886]: E0129 17:01:00.058225 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488" Jan 29 17:01:00 crc kubenswrapper[4886]: E0129 17:01:00.058418 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s8fvq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-xt9wq_openstack-operators(53042ed9-d676-4bb4-bf7b-9b3520aafd12): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 17:01:00 crc kubenswrapper[4886]: E0129 17:01:00.059737 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xt9wq" podUID="53042ed9-d676-4bb4-bf7b-9b3520aafd12" Jan 29 17:01:00 crc kubenswrapper[4886]: E0129 17:01:00.864071 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xt9wq" podUID="53042ed9-d676-4bb4-bf7b-9b3520aafd12" Jan 29 17:01:02 crc kubenswrapper[4886]: E0129 17:01:02.101540 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8" Jan 29 17:01:02 crc kubenswrapper[4886]: E0129 17:01:02.101989 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4t2vc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5fb775575f-4mmm8_openstack-operators(81b8c703-d895-41ce-8ca3-99fd6b6eecb6): ErrImagePull: 
rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 17:01:02 crc kubenswrapper[4886]: E0129 17:01:02.103599 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-4mmm8" podUID="81b8c703-d895-41ce-8ca3-99fd6b6eecb6" Jan 29 17:01:02 crc kubenswrapper[4886]: I0129 17:01:02.614808 4886 scope.go:117] "RemoveContainer" containerID="1ef597c576c05004c5148470ade7ddd51ab3cad8d942f918ff09afb054559dfc" Jan 29 17:01:02 crc kubenswrapper[4886]: E0129 17:01:02.615064 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:01:03 crc kubenswrapper[4886]: E0129 17:01:03.132984 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4" Jan 29 17:01:03 crc kubenswrapper[4886]: E0129 17:01:03.133161 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wckhl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-8886f4c47-pfw9c_openstack-operators(02decfa9-69fb-46b5-8b30-30954e39d411): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 17:01:03 crc kubenswrapper[4886]: E0129 17:01:03.135266 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-pfw9c" podUID="02decfa9-69fb-46b5-8b30-30954e39d411" Jan 29 17:01:03 crc kubenswrapper[4886]: E0129 17:01:03.411164 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-4mmm8" podUID="81b8c703-d895-41ce-8ca3-99fd6b6eecb6" Jan 29 17:01:03 crc kubenswrapper[4886]: E0129 17:01:03.888665 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4\\\"\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-pfw9c" podUID="02decfa9-69fb-46b5-8b30-30954e39d411" Jan 29 17:01:06 crc kubenswrapper[4886]: E0129 17:01:06.709849 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b" Jan 29 17:01:06 crc kubenswrapper[4886]: E0129 17:01:06.711186 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kv82w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-xnrxl_openstack-operators(6a145dac-4d02-493c-9bd8-2f9652fcb1d1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 17:01:06 crc kubenswrapper[4886]: E0129 17:01:06.712534 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-xnrxl" podUID="6a145dac-4d02-493c-9bd8-2f9652fcb1d1" Jan 29 17:01:06 crc kubenswrapper[4886]: E0129 17:01:06.930025 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-xnrxl" podUID="6a145dac-4d02-493c-9bd8-2f9652fcb1d1" Jan 29 17:01:07 crc kubenswrapper[4886]: E0129 17:01:07.810891 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382" Jan 29 17:01:07 crc kubenswrapper[4886]: E0129 17:01:07.811439 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tpd7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-cmfj2_openstack-operators(608c459b-5b47-478a-9e3a-d83d935ae7c7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 17:01:07 crc kubenswrapper[4886]: E0129 17:01:07.812706 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-cmfj2" podUID="608c459b-5b47-478a-9e3a-d83d935ae7c7" Jan 29 17:01:07 crc kubenswrapper[4886]: E0129 17:01:07.917404 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-cmfj2" podUID="608c459b-5b47-478a-9e3a-d83d935ae7c7" Jan 29 17:01:13 crc kubenswrapper[4886]: I0129 17:01:13.616227 4886 scope.go:117] "RemoveContainer" containerID="1ef597c576c05004c5148470ade7ddd51ab3cad8d942f918ff09afb054559dfc" Jan 29 17:01:13 crc kubenswrapper[4886]: E0129 17:01:13.617189 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:01:14 crc kubenswrapper[4886]: E0129 17:01:14.313730 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6" Jan 29 17:01:14 crc kubenswrapper[4886]: E0129 17:01:14.313947 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z6xj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-9zqmc_openstack-operators(053a2790-370f-44bd-a2c0-603ffb22ed3c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 17:01:14 crc kubenswrapper[4886]: E0129 17:01:14.315121 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9zqmc" podUID="053a2790-370f-44bd-a2c0-603ffb22ed3c" Jan 29 17:01:14 crc 
kubenswrapper[4886]: E0129 17:01:14.975303 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9zqmc" podUID="053a2790-370f-44bd-a2c0-603ffb22ed3c" Jan 29 17:01:15 crc kubenswrapper[4886]: E0129 17:01:15.583103 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be" Jan 29 17:01:15 crc kubenswrapper[4886]: E0129 17:01:15.583303 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lmzzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6687f8d877-8gq2g_openstack-operators(7b52b050-b925-4562-8682-693917b7899c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 17:01:15 crc kubenswrapper[4886]: E0129 17:01:15.584606 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-8gq2g" podUID="7b52b050-b925-4562-8682-693917b7899c" Jan 29 17:01:15 crc kubenswrapper[4886]: E0129 17:01:15.984731 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-8gq2g" podUID="7b52b050-b925-4562-8682-693917b7899c" Jan 29 17:01:16 crc kubenswrapper[4886]: E0129 17:01:16.742897 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566" Jan 29 17:01:16 crc kubenswrapper[4886]: E0129 17:01:16.743106 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bdh8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7dd968899f-zpgq2_openstack-operators(70336809-8231-4ed9-a912-8b668aaa53bb): ErrImagePull: rpc error: code = Canceled desc = copying 
config: context canceled" logger="UnhandledError" Jan 29 17:01:16 crc kubenswrapper[4886]: E0129 17:01:16.744372 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-zpgq2" podUID="70336809-8231-4ed9-a912-8b668aaa53bb" Jan 29 17:01:16 crc kubenswrapper[4886]: E0129 17:01:16.751570 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241" Jan 29 17:01:16 crc kubenswrapper[4886]: E0129 17:01:16.751739 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p5c9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-hf95f_openstack-operators(cbfeb105-c5ee-408e-aac9-e4128e58f0e3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 17:01:16 crc kubenswrapper[4886]: E0129 17:01:16.752948 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-hf95f" podUID="cbfeb105-c5ee-408e-aac9-e4128e58f0e3" Jan 29 17:01:17 crc kubenswrapper[4886]: E0129 17:01:16.992211 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-hf95f" podUID="cbfeb105-c5ee-408e-aac9-e4128e58f0e3" Jan 29 17:01:17 crc kubenswrapper[4886]: E0129 17:01:17.682949 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf" Jan 29 17:01:17 crc kubenswrapper[4886]: E0129 17:01:17.683149 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pw5nj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-c4j5s_openstack-operators(4c2d29a3-d017-4e76-9a82-02943a6b38bf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Jan 29 17:01:17 crc kubenswrapper[4886]: E0129 17:01:17.684339 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c4j5s" podUID="4c2d29a3-d017-4e76-9a82-02943a6b38bf" Jan 29 17:01:17 crc kubenswrapper[4886]: E0129 17:01:17.997974 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c4j5s" podUID="4c2d29a3-d017-4e76-9a82-02943a6b38bf" Jan 29 17:01:18 crc kubenswrapper[4886]: E0129 17:01:18.529598 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e" Jan 29 17:01:18 crc kubenswrapper[4886]: E0129 17:01:18.530479 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tvgcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod nova-operator-controller-manager-55bff696bd-dxcgn_openstack-operators(c3cbde0f-6b5d-47cf-93e6-3d2e12051aba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 17:01:18 crc kubenswrapper[4886]: E0129 17:01:18.532534 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dxcgn" podUID="c3cbde0f-6b5d-47cf-93e6-3d2e12051aba" Jan 29 17:01:19 crc kubenswrapper[4886]: E0129 17:01:19.010898 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dxcgn" podUID="c3cbde0f-6b5d-47cf-93e6-3d2e12051aba" Jan 29 17:01:19 crc kubenswrapper[4886]: E0129 17:01:19.761877 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/openstack-k8s-operators/telemetry-operator:0e065ec457961704e9d1c504e4175b5fe8df623e" Jan 29 17:01:19 crc kubenswrapper[4886]: E0129 17:01:19.761937 4886 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/openstack-k8s-operators/telemetry-operator:0e065ec457961704e9d1c504e4175b5fe8df623e" Jan 29 17:01:19 crc kubenswrapper[4886]: E0129 17:01:19.762107 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.200:5001/openstack-k8s-operators/telemetry-operator:0e065ec457961704e9d1c504e4175b5fe8df623e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wcgmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-75495fd598-2hpj4_openstack-operators(7db85474-4c59-4db6-ab4a-51092ebd5c62): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 17:01:19 crc kubenswrapper[4886]: E0129 17:01:19.763408 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-75495fd598-2hpj4" podUID="7db85474-4c59-4db6-ab4a-51092ebd5c62" Jan 29 17:01:20 crc kubenswrapper[4886]: E0129 17:01:20.024976 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.200:5001/openstack-k8s-operators/telemetry-operator:0e065ec457961704e9d1c504e4175b5fe8df623e\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-75495fd598-2hpj4" podUID="7db85474-4c59-4db6-ab4a-51092ebd5c62" Jan 29 17:01:21 crc kubenswrapper[4886]: E0129 17:01:21.018740 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382" Jan 29 17:01:21 crc kubenswrapper[4886]: E0129 17:01:21.018949 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5rxpp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d9697b7f4-rhxnz_openstack-operators(d01e417c-a1b0-445d-83eb-f3c21a492138): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 17:01:21 crc kubenswrapper[4886]: E0129 17:01:21.020178 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rhxnz" podUID="d01e417c-a1b0-445d-83eb-f3c21a492138" Jan 29 17:01:22 crc kubenswrapper[4886]: E0129 17:01:22.882313 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Jan 29 17:01:22 crc kubenswrapper[4886]: E0129 17:01:22.883027 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h5skg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-kwr4n_openstack-operators(67107e9f-cf09-4d35-af26-c77f4d76083a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 29 17:01:22 crc kubenswrapper[4886]: E0129 17:01:22.884233 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kwr4n" podUID="67107e9f-cf09-4d35-af26-c77f4d76083a"
Jan 29 17:01:23 crc kubenswrapper[4886]: E0129 17:01:23.840359 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kwr4n" podUID="67107e9f-cf09-4d35-af26-c77f4d76083a"
Jan 29 17:01:25 crc kubenswrapper[4886]: I0129 17:01:25.615552 4886 scope.go:117] "RemoveContainer" containerID="1ef597c576c05004c5148470ade7ddd51ab3cad8d942f918ff09afb054559dfc"
Jan 29 17:01:25 crc kubenswrapper[4886]: E0129 17:01:25.616427 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:01:27 crc kubenswrapper[4886]: E0129 17:01:27.848543 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2"
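The CrashLoopBackOff entry above reports "back-off 5m0s", meaning the kubelet has reached its retry cap for this container; the ImagePullBackOff entries are throttled by the same doubling schedule. A minimal sketch of that arithmetic, assuming the upstream kubelet defaults of a 10s base and 5m cap (an assumption consistent with, but not proven by, this log):

    # Doubling back-off as the kubelet applies it between container restarts
    # or image-pull retries. BASE_S=10 and CAP_S=300 are assumed upstream
    # defaults, matching the "back-off 5m0s" cap logged above.
    BASE_S, CAP_S = 10, 300

    def backoff_delay(failures: int) -> int:
        """Seconds to wait before retry number `failures` (1-based)."""
        return min(BASE_S * 2 ** (failures - 1), CAP_S)

    # 10, 20, 40, 80, 160, 300, 300: the cap is reached by the sixth failure.
    print([backoff_delay(n) for n in range(1, 8)])

Jan 29 17:01:27 crc kubenswrapper[4886]: E0129 17:01:27.849110 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container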
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kll8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-ffdr9_openstack-operators(165231a4-c627-484b-9aab-b4ce3feafe7e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 17:01:27 crc kubenswrapper[4886]: E0129 17:01:27.850719 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ffdr9" podUID="165231a4-c627-484b-9aab-b4ce3feafe7e" Jan 29 17:01:28 crc kubenswrapper[4886]: I0129 17:01:28.380175 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-t5n28"] Jan 29 17:01:28 crc kubenswrapper[4886]: W0129 17:01:28.403441 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2898e34_e423_4576_a765_3919510dcd85.slice/crio-32bad54df0a05d379850970b4bc6fa4c00d6a1b6eec5ddf09b64a9bc7353231b WatchSource:0}: Error finding container 32bad54df0a05d379850970b4bc6fa4c00d6a1b6eec5ddf09b64a9bc7353231b: Status 404 returned error can't find the container with id 32bad54df0a05d379850970b4bc6fa4c00d6a1b6eec5ddf09b64a9bc7353231b Jan 29 17:01:28 crc kubenswrapper[4886]: I0129 17:01:28.471756 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4"] Jan 29 17:01:28 crc kubenswrapper[4886]: I0129 17:01:28.614314 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh"] Jan 29 17:01:28 crc kubenswrapper[4886]: E0129 17:01:28.647678 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-zpgq2" podUID="70336809-8231-4ed9-a912-8b668aaa53bb" Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.107488 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xt9wq" event={"ID":"53042ed9-d676-4bb4-bf7b-9b3520aafd12","Type":"ContainerStarted","Data":"08631cad71c683ae7bc93ea38f8ec2a7efbc6831d0396ac48aebe884fe6bbe1c"} Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.108193 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xt9wq" Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.109417 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-pfw9c" event={"ID":"02decfa9-69fb-46b5-8b30-30954e39d411","Type":"ContainerStarted","Data":"3ba7acc051744e2e2125dd34e8289a04c4077f3f8fb45115cbb4dd6735c52ec1"} Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.109655 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-pfw9c" Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.111134 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-cmfj2" event={"ID":"608c459b-5b47-478a-9e3a-d83d935ae7c7","Type":"ContainerStarted","Data":"27f7ecc14812bb16e02a66d7d60e9cacc89d5b8c40c2ffcd19146ed9cbcb9221"} Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.111351 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-cmfj2" Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.112755 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4" event={"ID":"037bf2ff-dd50-4d62-a525-5304c088cbc0","Type":"ContainerStarted","Data":"c19dc8dbddb237b0be234b39305b171ff3c8fede1daf2a27f71662844567e30c"} Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.112842 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4" event={"ID":"037bf2ff-dd50-4d62-a525-5304c088cbc0","Type":"ContainerStarted","Data":"10c75283c7e6e3cd50f8debdbf1161ce254cb6c87ee175d8a1e5bd1d6ca877ea"} Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.112904 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4" Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.114410 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-w6qc6" event={"ID":"4e16e340-e213-492a-9c93-851df7b1bddb","Type":"ContainerStarted","Data":"529906ac788956a959a6dfa38ad9145f4e162db09f249ae9aa26a562137a393c"} Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 
17:01:29.114622 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-w6qc6" Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.115965 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh" event={"ID":"c2b6285c-ada4-43f6-8716-53b2afa13723","Type":"ContainerStarted","Data":"b0e418cb46ad17eb310f510a0a59751fbe5a247c55458b00418987b4f06bd783"} Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.117527 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-xnrxl" event={"ID":"6a145dac-4d02-493c-9bd8-2f9652fcb1d1","Type":"ContainerStarted","Data":"22f5ed753cfd8c08f1ff897163e136a0a25b32b0b9a1dbe8a68f3848234f1080"} Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.117778 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-xnrxl" Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.119147 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xnccq" event={"ID":"14d9257b-94ae-4b29-b45a-403e034535d3","Type":"ContainerStarted","Data":"2a4a6ae6649f5fad516026d403eb47836c1f46f5b814d464817c8ac459496def"} Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.119366 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xnccq" Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.120544 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-77z62" event={"ID":"10cac00e-0cd8-4d53-a4dd-3f6b5200e7e0","Type":"ContainerStarted","Data":"8f65abf262a8949c0e08aae4a5f9b50c87e9a4fa88a2936ca60a659c65ed12cb"} Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.120704 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-77z62" Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.121728 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5n28" event={"ID":"f2898e34-e423-4576-a765-3919510dcd85","Type":"ContainerStarted","Data":"32bad54df0a05d379850970b4bc6fa4c00d6a1b6eec5ddf09b64a9bc7353231b"} Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.123233 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-4mmm8" event={"ID":"81b8c703-d895-41ce-8ca3-99fd6b6eecb6","Type":"ContainerStarted","Data":"50db265bbf35a4d0586f20b78bb6755925fb4c2fcdc76b9f9b71ed13398cf4e2"} Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.123474 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-4mmm8" Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.124945 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-2g2cz" event={"ID":"3ffc5e8b-7f7a-4585-b43d-07e2589493c9","Type":"ContainerStarted","Data":"4583adefe889d5e4fa04809fe08e963718545c998f60131fec2ccfff152ec10b"} Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.125079 4886 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-2g2cz" Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.126471 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qf2xg" event={"ID":"3c56c53e-a292-4e75-b069-c1d06ceeb6c5","Type":"ContainerStarted","Data":"57f0d638acef9226c1817c1099045f15651a37faa50324df6baf8fd5d16315a3"} Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.126619 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qf2xg" Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.266341 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xt9wq" podStartSLOduration=5.225412994 podStartE2EDuration="1m7.266309238s" podCreationTimestamp="2026-01-29 17:00:22 +0000 UTC" firstStartedPulling="2026-01-29 17:00:25.835315177 +0000 UTC m=+2308.744034449" lastFinishedPulling="2026-01-29 17:01:27.876211421 +0000 UTC m=+2370.784930693" observedRunningTime="2026-01-29 17:01:29.265259109 +0000 UTC m=+2372.173978381" watchObservedRunningTime="2026-01-29 17:01:29.266309238 +0000 UTC m=+2372.175028510" Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.334932 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-xnrxl" podStartSLOduration=5.497004679 podStartE2EDuration="1m7.334915526s" podCreationTimestamp="2026-01-29 17:00:22 +0000 UTC" firstStartedPulling="2026-01-29 17:00:26.05993329 +0000 UTC m=+2308.968652562" lastFinishedPulling="2026-01-29 17:01:27.897844137 +0000 UTC m=+2370.806563409" observedRunningTime="2026-01-29 17:01:29.332899591 +0000 UTC m=+2372.241618863" watchObservedRunningTime="2026-01-29 17:01:29.334915526 +0000 UTC m=+2372.243634798" Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.474723 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-w6qc6" podStartSLOduration=5.925280054 podStartE2EDuration="1m8.474704266s" podCreationTimestamp="2026-01-29 17:00:21 +0000 UTC" firstStartedPulling="2026-01-29 17:00:23.341547311 +0000 UTC m=+2306.250266583" lastFinishedPulling="2026-01-29 17:01:25.890971513 +0000 UTC m=+2368.799690795" observedRunningTime="2026-01-29 17:01:29.417953763 +0000 UTC m=+2372.326673035" watchObservedRunningTime="2026-01-29 17:01:29.474704266 +0000 UTC m=+2372.383423538" Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.558481 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-cmfj2" podStartSLOduration=5.429190629 podStartE2EDuration="1m7.558461423s" podCreationTimestamp="2026-01-29 17:00:22 +0000 UTC" firstStartedPulling="2026-01-29 17:00:25.807758062 +0000 UTC m=+2308.716477344" lastFinishedPulling="2026-01-29 17:01:27.937028866 +0000 UTC m=+2370.845748138" observedRunningTime="2026-01-29 17:01:29.556019386 +0000 UTC m=+2372.464738678" watchObservedRunningTime="2026-01-29 17:01:29.558461423 +0000 UTC m=+2372.467180695" Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.560737 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-4mmm8" podStartSLOduration=4.597201529 
podStartE2EDuration="1m8.560726105s" podCreationTimestamp="2026-01-29 17:00:21 +0000 UTC" firstStartedPulling="2026-01-29 17:00:23.929519418 +0000 UTC m=+2306.838238680" lastFinishedPulling="2026-01-29 17:01:27.893043984 +0000 UTC m=+2370.801763256" observedRunningTime="2026-01-29 17:01:29.473711069 +0000 UTC m=+2372.382430341" watchObservedRunningTime="2026-01-29 17:01:29.560726105 +0000 UTC m=+2372.469445377" Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.637387 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4" podStartSLOduration=67.637366376 podStartE2EDuration="1m7.637366376s" podCreationTimestamp="2026-01-29 17:00:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:01:29.607618557 +0000 UTC m=+2372.516337829" watchObservedRunningTime="2026-01-29 17:01:29.637366376 +0000 UTC m=+2372.546085658" Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.672910 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-2g2cz" podStartSLOduration=6.123086582 podStartE2EDuration="1m8.672888985s" podCreationTimestamp="2026-01-29 17:00:21 +0000 UTC" firstStartedPulling="2026-01-29 17:00:23.341275733 +0000 UTC m=+2306.249995005" lastFinishedPulling="2026-01-29 17:01:25.891078096 +0000 UTC m=+2368.799797408" observedRunningTime="2026-01-29 17:01:29.636013119 +0000 UTC m=+2372.544732401" watchObservedRunningTime="2026-01-29 17:01:29.672888985 +0000 UTC m=+2372.581608257" Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.679250 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-pfw9c" podStartSLOduration=4.687946172 podStartE2EDuration="1m8.679233869s" podCreationTimestamp="2026-01-29 17:00:21 +0000 UTC" firstStartedPulling="2026-01-29 17:00:23.901742257 +0000 UTC m=+2306.810461529" lastFinishedPulling="2026-01-29 17:01:27.893029954 +0000 UTC m=+2370.801749226" observedRunningTime="2026-01-29 17:01:29.654607381 +0000 UTC m=+2372.563326653" watchObservedRunningTime="2026-01-29 17:01:29.679233869 +0000 UTC m=+2372.587953141" Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.701082 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qf2xg" podStartSLOduration=4.686906142 podStartE2EDuration="1m8.701063561s" podCreationTimestamp="2026-01-29 17:00:21 +0000 UTC" firstStartedPulling="2026-01-29 17:00:23.791895589 +0000 UTC m=+2306.700614861" lastFinishedPulling="2026-01-29 17:01:27.806052988 +0000 UTC m=+2370.714772280" observedRunningTime="2026-01-29 17:01:29.69593831 +0000 UTC m=+2372.604657582" watchObservedRunningTime="2026-01-29 17:01:29.701063561 +0000 UTC m=+2372.609782833" Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.723103 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xnccq" podStartSLOduration=5.724339838 podStartE2EDuration="1m7.723087767s" podCreationTimestamp="2026-01-29 17:00:22 +0000 UTC" firstStartedPulling="2026-01-29 17:00:25.807725441 +0000 UTC m=+2308.716444713" lastFinishedPulling="2026-01-29 17:01:27.80647336 +0000 UTC m=+2370.715192642" observedRunningTime="2026-01-29 17:01:29.718636805 +0000 
UTC m=+2372.627356077" watchObservedRunningTime="2026-01-29 17:01:29.723087767 +0000 UTC m=+2372.631807039"
Jan 29 17:01:29 crc kubenswrapper[4886]: I0129 17:01:29.743221 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-77z62" podStartSLOduration=12.300214887 podStartE2EDuration="1m7.743202161s" podCreationTimestamp="2026-01-29 17:00:22 +0000 UTC" firstStartedPulling="2026-01-29 17:00:24.08576721 +0000 UTC m=+2306.994486482" lastFinishedPulling="2026-01-29 17:01:19.528754484 +0000 UTC m=+2362.437473756" observedRunningTime="2026-01-29 17:01:29.735060637 +0000 UTC m=+2372.643779909" watchObservedRunningTime="2026-01-29 17:01:29.743202161 +0000 UTC m=+2372.651921433"
Jan 29 17:01:33 crc kubenswrapper[4886]: I0129 17:01:33.256841 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-xt9wq"
Jan 29 17:01:33 crc kubenswrapper[4886]: I0129 17:01:33.380975 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-xnrxl"
Jan 29 17:01:33 crc kubenswrapper[4886]: E0129 17:01:33.616952 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rhxnz" podUID="d01e417c-a1b0-445d-83eb-f3c21a492138"
Jan 29 17:01:34 crc kubenswrapper[4886]: I0129 17:01:34.924011 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-546c7b8b6d-hngs4"
Jan 29 17:01:39 crc kubenswrapper[4886]: I0129 17:01:39.615582 4886 scope.go:117] "RemoveContainer" containerID="1ef597c576c05004c5148470ade7ddd51ab3cad8d942f918ff09afb054559dfc"
Jan 29 17:01:39 crc kubenswrapper[4886]: E0129 17:01:39.616440 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:01:41 crc kubenswrapper[4886]: I0129 17:01:41.246680 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh" event={"ID":"c2b6285c-ada4-43f6-8716-53b2afa13723","Type":"ContainerStarted","Data":"1eeec1940f0358f8bf1517780cc09baefe598dce219e723b21d9e385c74fe04b"}
Jan 29 17:01:41 crc kubenswrapper[4886]: I0129 17:01:41.247285 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh"
Jan 29 17:01:41 crc kubenswrapper[4886]: I0129 17:01:41.248388 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dxcgn" event={"ID":"c3cbde0f-6b5d-47cf-93e6-3d2e12051aba","Type":"ContainerStarted","Data":"8e7e6c945083aad52b225a07e909c682655ca5a70c8963d43b4952ec8ca4b612"}
Jan 29 17:01:41 crc kubenswrapper[4886]: I0129 17:01:41.248589 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dxcgn"
Jan 29 17:01:41 crc kubenswrapper[4886]: I0129 17:01:41.251874 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5n28" event={"ID":"f2898e34-e423-4576-a765-3919510dcd85","Type":"ContainerStarted","Data":"d6cab8e8cbc1ca14b2c6e02750c867850499ec9790828f8ee283de7d764ea83d"}
Jan 29 17:01:41 crc kubenswrapper[4886]: I0129 17:01:41.254091 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9zqmc" event={"ID":"053a2790-370f-44bd-a2c0-603ffb22ed3c","Type":"ContainerStarted","Data":"1898f8e239a7d5c23d50a83a89acce63c295993247ee81f49a76afabc303731c"}
Jan 29 17:01:41 crc kubenswrapper[4886]: I0129 17:01:41.254376 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9zqmc"
Jan 29 17:01:41 crc kubenswrapper[4886]: I0129 17:01:41.255932 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-75495fd598-2hpj4" event={"ID":"7db85474-4c59-4db6-ab4a-51092ebd5c62","Type":"ContainerStarted","Data":"8c637077bf4d9ad051c8d079b2d61c33cfa17c707c30487dbed27b7dd2bf5baf"}
Jan 29 17:01:41 crc kubenswrapper[4886]: I0129 17:01:41.256550 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-75495fd598-2hpj4"
Jan 29 17:01:41 crc kubenswrapper[4886]: I0129 17:01:41.262961 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-8gq2g" event={"ID":"7b52b050-b925-4562-8682-693917b7899c","Type":"ContainerStarted","Data":"96acbcf5a952263baae2b5f40a51d7232b4238dcfd6172b4c09e0687a80ea6f6"}
Jan 29 17:01:41 crc kubenswrapper[4886]: I0129 17:01:41.263744 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-8gq2g"
Jan 29 17:01:41 crc kubenswrapper[4886]: I0129 17:01:41.271269 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c4j5s" event={"ID":"4c2d29a3-d017-4e76-9a82-02943a6b38bf","Type":"ContainerStarted","Data":"428345c51b77565a0a046dbcc4a2a80cf710824db549ca179d99c2c267860cd4"}
Jan 29 17:01:41 crc kubenswrapper[4886]: I0129 17:01:41.271933 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c4j5s"
Jan 29 17:01:41 crc kubenswrapper[4886]: I0129 17:01:41.275365 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-hf95f" event={"ID":"cbfeb105-c5ee-408e-aac9-e4128e58f0e3","Type":"ContainerStarted","Data":"e53988331fb9322ad0fc5d89fe33040c2ede7e1105074dcb85a5b0b441bfd1ef"}
Jan 29 17:01:41 crc kubenswrapper[4886]: I0129 17:01:41.275773 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-hf95f"
Jan 29 17:01:41 crc kubenswrapper[4886]: I0129 17:01:41.310280 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh" podStartSLOduration=67.768428637 podStartE2EDuration="1m19.310257993s" podCreationTimestamp="2026-01-29 17:00:22 +0000 UTC" firstStartedPulling="2026-01-29 17:01:28.642007313 +0000 UTC m=+2371.550726585" lastFinishedPulling="2026-01-29 17:01:40.183836669 +0000 UTC m=+2383.092555941" observedRunningTime="2026-01-29 17:01:41.28764109 +0000 UTC m=+2384.196360362" watchObservedRunningTime="2026-01-29 17:01:41.310257993 +0000 UTC m=+2384.218977265"
Jan 29 17:01:41 crc kubenswrapper[4886]: I0129 17:01:41.353996 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-8gq2g" podStartSLOduration=5.06165201 podStartE2EDuration="1m19.353976437s" podCreationTimestamp="2026-01-29 17:00:22 +0000 UTC" firstStartedPulling="2026-01-29 17:00:25.830886882 +0000 UTC m=+2308.739606154" lastFinishedPulling="2026-01-29 17:01:40.123211309 +0000 UTC m=+2383.031930581" observedRunningTime="2026-01-29 17:01:41.320040452 +0000 UTC m=+2384.228759724" watchObservedRunningTime="2026-01-29 17:01:41.353976437 +0000 UTC m=+2384.262695719"
Jan 29 17:01:41 crc kubenswrapper[4886]: I0129 17:01:41.354327 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-75495fd598-2hpj4" podStartSLOduration=5.177890842 podStartE2EDuration="1m19.354316557s" podCreationTimestamp="2026-01-29 17:00:22 +0000 UTC" firstStartedPulling="2026-01-29 17:00:26.007492856 +0000 UTC m=+2308.916212128" lastFinishedPulling="2026-01-29 17:01:40.183918571 +0000 UTC m=+2383.092637843" observedRunningTime="2026-01-29 17:01:41.345078112 +0000 UTC m=+2384.253797384" watchObservedRunningTime="2026-01-29 17:01:41.354316557 +0000 UTC m=+2384.263035829"
Jan 29 17:01:41 crc kubenswrapper[4886]: I0129 17:01:41.365013 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-hf95f" podStartSLOduration=5.021819874 podStartE2EDuration="1m19.364995521s" podCreationTimestamp="2026-01-29 17:00:22 +0000 UTC" firstStartedPulling="2026-01-29 17:00:25.79558658 +0000 UTC m=+2308.704305852" lastFinishedPulling="2026-01-29 17:01:40.138762227 +0000 UTC m=+2383.047481499" observedRunningTime="2026-01-29 17:01:41.359255403 +0000 UTC m=+2384.267974685" watchObservedRunningTime="2026-01-29 17:01:41.364995521 +0000 UTC m=+2384.273714793"
Jan 29 17:01:41 crc kubenswrapper[4886]: I0129 17:01:41.383794 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c4j5s" podStartSLOduration=3.272946811 podStartE2EDuration="1m19.383777958s" podCreationTimestamp="2026-01-29 17:00:22 +0000 UTC" firstStartedPulling="2026-01-29 17:00:24.073782703 +0000 UTC m=+2306.982501975" lastFinishedPulling="2026-01-29 17:01:40.18461385 +0000 UTC m=+2383.093333122" observedRunningTime="2026-01-29 17:01:41.376370084 +0000 UTC m=+2384.285089356" watchObservedRunningTime="2026-01-29 17:01:41.383777958 +0000 UTC m=+2384.292497230"
Jan 29 17:01:41 crc kubenswrapper[4886]: I0129 17:01:41.412067 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dxcgn" podStartSLOduration=3.335934165 podStartE2EDuration="1m19.412049257s" podCreationTimestamp="2026-01-29 17:00:22 +0000 UTC" firstStartedPulling="2026-01-29 17:00:24.108265662 +0000 UTC m=+2307.016984934" lastFinishedPulling="2026-01-29 17:01:40.184380754 +0000 UTC m=+2383.093100026" observedRunningTime="2026-01-29 17:01:41.404637212 +0000 UTC m=+2384.313356484" watchObservedRunningTime="2026-01-29 17:01:41.412049257 +0000 UTC m=+2384.320768529"
Jan 29 17:01:41 crc kubenswrapper[4886]: I0129 17:01:41.430883 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9zqmc" podStartSLOduration=5.066415954 podStartE2EDuration="1m19.430867255s" podCreationTimestamp="2026-01-29 17:00:22 +0000 UTC" firstStartedPulling="2026-01-29 17:00:25.834925776 +0000 UTC m=+2308.743645048" lastFinishedPulling="2026-01-29 17:01:40.199377077 +0000 UTC m=+2383.108096349" observedRunningTime="2026-01-29 17:01:41.429012894 +0000 UTC m=+2384.337732166" watchObservedRunningTime="2026-01-29 17:01:41.430867255 +0000 UTC m=+2384.339586527"
Jan 29 17:01:42 crc kubenswrapper[4886]: I0129 17:01:42.280213 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-2g2cz"
Jan 29 17:01:42 crc kubenswrapper[4886]: I0129 17:01:42.284422 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5n28"
Jan 29 17:01:42 crc kubenswrapper[4886]: I0129 17:01:42.303620 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-w6qc6"
Jan 29 17:01:42 crc kubenswrapper[4886]: I0129 17:01:42.339007 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5n28" podStartSLOduration=68.553083419 podStartE2EDuration="1m20.338983487s" podCreationTimestamp="2026-01-29 17:00:22 +0000 UTC" firstStartedPulling="2026-01-29 17:01:28.409705235 +0000 UTC m=+2371.318424507" lastFinishedPulling="2026-01-29 17:01:40.195605303 +0000 UTC m=+2383.104324575" observedRunningTime="2026-01-29 17:01:42.324690833 +0000 UTC m=+2385.233410115" watchObservedRunningTime="2026-01-29 17:01:42.338983487 +0000 UTC m=+2385.247702759"
Jan 29 17:01:42 crc kubenswrapper[4886]: I0129 17:01:42.478253 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-pfw9c"
Jan 29 17:01:42 crc kubenswrapper[4886]: I0129 17:01:42.508272 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-4mmm8"
Jan 29 17:01:42 crc kubenswrapper[4886]: I0129 17:01:42.583041 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qf2xg"
Jan 29 17:01:42 crc kubenswrapper[4886]: I0129 17:01:42.613892 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-77z62"
Jan 29 17:01:42 crc kubenswrapper[4886]: E0129 17:01:42.620988 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ffdr9" podUID="165231a4-c627-484b-9aab-b4ce3feafe7e"
Jan 29 17:01:42 crc kubenswrapper[4886]: I0129 17:01:42.928100 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xnccq"
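The pod_startup_latency_tracker records above carry enough fields to recover the arithmetic behind the three durations: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that E2E figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A minimal Python sketch, using timestamps copied from the ironic-operator record above; note klog prints more fractional digits than strptime's %f accepts, so the check agrees only to the truncated microsecond:

```python
from datetime import datetime

def parse_ts(s: str) -> datetime:
    # "2026-01-29 17:00:24.08576721 +0000 UTC" -- klog emits up to nine
    # fractional digits, strptime's %f accepts at most six, so truncate.
    date, clock, offset, _zone = s.split()
    whole, _, frac = clock.partition(".")
    return datetime.strptime(f"{date} {whole}.{(frac or '0')[:6]} {offset}",
                             "%Y-%m-%d %H:%M:%S.%f %z")

created = parse_ts("2026-01-29 17:00:22 +0000 UTC")
first   = parse_ts("2026-01-29 17:00:24.08576721 +0000 UTC")
last    = parse_ts("2026-01-29 17:01:19.528754484 +0000 UTC")
running = parse_ts("2026-01-29 17:01:29.743202161 +0000 UTC")

e2e  = (running - created).total_seconds()  # ~67.743s == "1m7.743202161s"
pull = (last - first).total_seconds()       # ~55.443s spent pulling the image
slo  = e2e - pull                           # ~12.300s == podStartSLOduration
print(f"e2e={e2e:.6f}s pull={pull:.6f}s slo={slo:.6f}s")
```

The same subtraction explains why pods that pulled early (mariadb, nova: SLO around 3s) and pods that pulled late (infra, openstack-baremetal: SLO around 68s) report such different SLO figures for near-identical E2E times.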
Jan 29 17:01:42 crc kubenswrapper[4886]: I0129 17:01:42.999073 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-cmfj2"
Jan 29 17:01:43 crc kubenswrapper[4886]: I0129 17:01:43.294185 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-zpgq2" event={"ID":"70336809-8231-4ed9-a912-8b668aaa53bb","Type":"ContainerStarted","Data":"98ae179bdcc94a3c5aec25014bf612acbff88d1aaff9be2ab0f329e78cbb5105"}
Jan 29 17:01:43 crc kubenswrapper[4886]: I0129 17:01:43.294381 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-zpgq2"
Jan 29 17:01:43 crc kubenswrapper[4886]: I0129 17:01:43.297286 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kwr4n" event={"ID":"67107e9f-cf09-4d35-af26-c77f4d76083a","Type":"ContainerStarted","Data":"d9fb8173587b39cf7aff6ed09fabb9e71bf83f66ea01fee608a23870907d7be6"}
Jan 29 17:01:43 crc kubenswrapper[4886]: I0129 17:01:43.297785 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kwr4n"
Jan 29 17:01:43 crc kubenswrapper[4886]: I0129 17:01:43.316262 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-zpgq2" podStartSLOduration=3.026945847 podStartE2EDuration="1m21.316243583s" podCreationTimestamp="2026-01-29 17:00:22 +0000 UTC" firstStartedPulling="2026-01-29 17:00:24.112535892 +0000 UTC m=+2307.021255164" lastFinishedPulling="2026-01-29 17:01:42.401833628 +0000 UTC m=+2385.310552900" observedRunningTime="2026-01-29 17:01:43.308137089 +0000 UTC m=+2386.216856361" watchObservedRunningTime="2026-01-29 17:01:43.316243583 +0000 UTC m=+2386.224962855"
Jan 29 17:01:43 crc kubenswrapper[4886]: I0129 17:01:43.330764 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kwr4n" podStartSLOduration=5.661298901 podStartE2EDuration="1m21.330741622s" podCreationTimestamp="2026-01-29 17:00:22 +0000 UTC" firstStartedPulling="2026-01-29 17:00:25.839312729 +0000 UTC m=+2308.748032011" lastFinishedPulling="2026-01-29 17:01:41.50875546 +0000 UTC m=+2384.417474732" observedRunningTime="2026-01-29 17:01:43.323316237 +0000 UTC m=+2386.232035509" watchObservedRunningTime="2026-01-29 17:01:43.330741622 +0000 UTC m=+2386.239460904"
Jan 29 17:01:49 crc kubenswrapper[4886]: I0129 17:01:49.349749 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rhxnz" event={"ID":"d01e417c-a1b0-445d-83eb-f3c21a492138","Type":"ContainerStarted","Data":"e82b816aa22fa7cb8c8087a66fb3102fb0562fb86926c76b7385ee50136b1363"}
Jan 29 17:01:49 crc kubenswrapper[4886]: I0129 17:01:49.350514 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rhxnz"
Jan 29 17:01:49 crc kubenswrapper[4886]: I0129 17:01:49.367279 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rhxnz" podStartSLOduration=3.3532462 podStartE2EDuration="1m28.36725026s" podCreationTimestamp="2026-01-29 17:00:21 +0000 UTC" firstStartedPulling="2026-01-29 17:00:23.915815413 +0000 UTC m=+2306.824534685" lastFinishedPulling="2026-01-29 17:01:48.929819463 +0000 UTC m=+2391.838538745" observedRunningTime="2026-01-29 17:01:49.361744649 +0000 UTC m=+2392.270463931" watchObservedRunningTime="2026-01-29 17:01:49.36725026 +0000 UTC m=+2392.275969552"
Jan 29 17:01:50 crc kubenswrapper[4886]: I0129 17:01:50.615237 4886 scope.go:117] "RemoveContainer" containerID="1ef597c576c05004c5148470ade7ddd51ab3cad8d942f918ff09afb054559dfc"
Jan 29 17:01:50 crc kubenswrapper[4886]: E0129 17:01:50.615639 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:01:52 crc kubenswrapper[4886]: I0129 17:01:52.660097 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kwr4n"
Jan 29 17:01:52 crc kubenswrapper[4886]: I0129 17:01:52.678292 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-zpgq2"
Jan 29 17:01:52 crc kubenswrapper[4886]: I0129 17:01:52.717105 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c4j5s"
Jan 29 17:01:52 crc kubenswrapper[4886]: I0129 17:01:52.751542 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9zqmc"
Jan 29 17:01:52 crc kubenswrapper[4886]: I0129 17:01:52.793977 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dxcgn"
Jan 29 17:01:52 crc kubenswrapper[4886]: I0129 17:01:52.830393 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-8gq2g"
Jan 29 17:01:53 crc kubenswrapper[4886]: I0129 17:01:53.310034 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-75495fd598-2hpj4"
Jan 29 17:01:53 crc kubenswrapper[4886]: I0129 17:01:53.345406 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-hf95f"
Jan 29 17:01:54 crc kubenswrapper[4886]: I0129 17:01:54.372937 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5n28"
Jan 29 17:01:54 crc kubenswrapper[4886]: I0129 17:01:54.419682 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh"
Jan 29 17:01:55 crc kubenswrapper[4886]: I0129 17:01:55.408258 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ffdr9" event={"ID":"165231a4-c627-484b-9aab-b4ce3feafe7e","Type":"ContainerStarted","Data":"b52f785a280bd9a7fef88e5f2e155831a76530296552cee1aafe344c231a6f35"}
Jan 29 17:01:55 crc kubenswrapper[4886]: I0129 17:01:55.439418 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ffdr9" podStartSLOduration=4.780321854 podStartE2EDuration="1m33.439398421s" podCreationTimestamp="2026-01-29 17:00:22 +0000 UTC" firstStartedPulling="2026-01-29 17:00:26.063449959 +0000 UTC m=+2308.972169231" lastFinishedPulling="2026-01-29 17:01:54.722526526 +0000 UTC m=+2397.631245798" observedRunningTime="2026-01-29 17:01:55.430084504 +0000 UTC m=+2398.338803816" watchObservedRunningTime="2026-01-29 17:01:55.439398421 +0000 UTC m=+2398.348117693"
Jan 29 17:02:01 crc kubenswrapper[4886]: I0129 17:02:01.615414 4886 scope.go:117] "RemoveContainer" containerID="1ef597c576c05004c5148470ade7ddd51ab3cad8d942f918ff09afb054559dfc"
Jan 29 17:02:01 crc kubenswrapper[4886]: E0129 17:02:01.616544 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:02:02 crc kubenswrapper[4886]: I0129 17:02:02.432258 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rhxnz"
Jan 29 17:02:15 crc kubenswrapper[4886]: I0129 17:02:15.615229 4886 scope.go:117] "RemoveContainer" containerID="1ef597c576c05004c5148470ade7ddd51ab3cad8d942f918ff09afb054559dfc"
Jan 29 17:02:15 crc kubenswrapper[4886]: E0129 17:02:15.616082 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:02:20 crc kubenswrapper[4886]: I0129 17:02:20.942517 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pmcr7"]
Jan 29 17:02:20 crc kubenswrapper[4886]: I0129 17:02:20.945692 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pmcr7"
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pmcr7" Jan 29 17:02:20 crc kubenswrapper[4886]: I0129 17:02:20.948432 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 29 17:02:20 crc kubenswrapper[4886]: I0129 17:02:20.949357 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 29 17:02:20 crc kubenswrapper[4886]: I0129 17:02:20.949617 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 29 17:02:20 crc kubenswrapper[4886]: I0129 17:02:20.952916 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-rvlpg" Jan 29 17:02:20 crc kubenswrapper[4886]: I0129 17:02:20.959060 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pmcr7"] Jan 29 17:02:21 crc kubenswrapper[4886]: I0129 17:02:21.030408 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4cgwx"] Jan 29 17:02:21 crc kubenswrapper[4886]: I0129 17:02:21.032623 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-4cgwx" Jan 29 17:02:21 crc kubenswrapper[4886]: I0129 17:02:21.035604 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 29 17:02:21 crc kubenswrapper[4886]: I0129 17:02:21.039044 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4cgwx"] Jan 29 17:02:21 crc kubenswrapper[4886]: I0129 17:02:21.124787 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7jjt\" (UniqueName: \"kubernetes.io/projected/2f1c4419-6120-44b9-853c-7a42391db3e7-kube-api-access-q7jjt\") pod \"dnsmasq-dns-675f4bcbfc-pmcr7\" (UID: \"2f1c4419-6120-44b9-853c-7a42391db3e7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pmcr7" Jan 29 17:02:21 crc kubenswrapper[4886]: I0129 17:02:21.124846 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhfqx\" (UniqueName: \"kubernetes.io/projected/204a721b-36ee-4631-8358-f2511f332249-kube-api-access-lhfqx\") pod \"dnsmasq-dns-78dd6ddcc-4cgwx\" (UID: \"204a721b-36ee-4631-8358-f2511f332249\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4cgwx" Jan 29 17:02:21 crc kubenswrapper[4886]: I0129 17:02:21.124883 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f1c4419-6120-44b9-853c-7a42391db3e7-config\") pod \"dnsmasq-dns-675f4bcbfc-pmcr7\" (UID: \"2f1c4419-6120-44b9-853c-7a42391db3e7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pmcr7" Jan 29 17:02:21 crc kubenswrapper[4886]: I0129 17:02:21.124980 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/204a721b-36ee-4631-8358-f2511f332249-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-4cgwx\" (UID: \"204a721b-36ee-4631-8358-f2511f332249\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4cgwx" Jan 29 17:02:21 crc kubenswrapper[4886]: I0129 17:02:21.125005 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/204a721b-36ee-4631-8358-f2511f332249-config\") pod \"dnsmasq-dns-78dd6ddcc-4cgwx\" (UID: \"204a721b-36ee-4631-8358-f2511f332249\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-4cgwx" Jan 29 17:02:21 crc kubenswrapper[4886]: I0129 17:02:21.226688 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/204a721b-36ee-4631-8358-f2511f332249-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-4cgwx\" (UID: \"204a721b-36ee-4631-8358-f2511f332249\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4cgwx" Jan 29 17:02:21 crc kubenswrapper[4886]: I0129 17:02:21.226733 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/204a721b-36ee-4631-8358-f2511f332249-config\") pod \"dnsmasq-dns-78dd6ddcc-4cgwx\" (UID: \"204a721b-36ee-4631-8358-f2511f332249\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4cgwx" Jan 29 17:02:21 crc kubenswrapper[4886]: I0129 17:02:21.226818 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7jjt\" (UniqueName: \"kubernetes.io/projected/2f1c4419-6120-44b9-853c-7a42391db3e7-kube-api-access-q7jjt\") pod \"dnsmasq-dns-675f4bcbfc-pmcr7\" (UID: \"2f1c4419-6120-44b9-853c-7a42391db3e7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pmcr7" Jan 29 17:02:21 crc kubenswrapper[4886]: I0129 17:02:21.226840 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhfqx\" (UniqueName: \"kubernetes.io/projected/204a721b-36ee-4631-8358-f2511f332249-kube-api-access-lhfqx\") pod \"dnsmasq-dns-78dd6ddcc-4cgwx\" (UID: \"204a721b-36ee-4631-8358-f2511f332249\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4cgwx" Jan 29 17:02:21 crc kubenswrapper[4886]: I0129 17:02:21.226859 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f1c4419-6120-44b9-853c-7a42391db3e7-config\") pod \"dnsmasq-dns-675f4bcbfc-pmcr7\" (UID: \"2f1c4419-6120-44b9-853c-7a42391db3e7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pmcr7" Jan 29 17:02:21 crc kubenswrapper[4886]: I0129 17:02:21.227769 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f1c4419-6120-44b9-853c-7a42391db3e7-config\") pod \"dnsmasq-dns-675f4bcbfc-pmcr7\" (UID: \"2f1c4419-6120-44b9-853c-7a42391db3e7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pmcr7" Jan 29 17:02:21 crc kubenswrapper[4886]: I0129 17:02:21.227788 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/204a721b-36ee-4631-8358-f2511f332249-config\") pod \"dnsmasq-dns-78dd6ddcc-4cgwx\" (UID: \"204a721b-36ee-4631-8358-f2511f332249\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4cgwx" Jan 29 17:02:21 crc kubenswrapper[4886]: I0129 17:02:21.227819 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/204a721b-36ee-4631-8358-f2511f332249-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-4cgwx\" (UID: \"204a721b-36ee-4631-8358-f2511f332249\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4cgwx" Jan 29 17:02:21 crc kubenswrapper[4886]: I0129 17:02:21.247739 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7jjt\" (UniqueName: \"kubernetes.io/projected/2f1c4419-6120-44b9-853c-7a42391db3e7-kube-api-access-q7jjt\") pod \"dnsmasq-dns-675f4bcbfc-pmcr7\" (UID: \"2f1c4419-6120-44b9-853c-7a42391db3e7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pmcr7" Jan 29 17:02:21 crc kubenswrapper[4886]: I0129 17:02:21.252147 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lhfqx\" (UniqueName: \"kubernetes.io/projected/204a721b-36ee-4631-8358-f2511f332249-kube-api-access-lhfqx\") pod \"dnsmasq-dns-78dd6ddcc-4cgwx\" (UID: \"204a721b-36ee-4631-8358-f2511f332249\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4cgwx" Jan 29 17:02:21 crc kubenswrapper[4886]: I0129 17:02:21.265940 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pmcr7" Jan 29 17:02:21 crc kubenswrapper[4886]: I0129 17:02:21.354315 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-4cgwx" Jan 29 17:02:21 crc kubenswrapper[4886]: I0129 17:02:21.775052 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pmcr7"] Jan 29 17:02:21 crc kubenswrapper[4886]: W0129 17:02:21.785858 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f1c4419_6120_44b9_853c_7a42391db3e7.slice/crio-617c1fe920842500bf22662dbcff00fb4394c8a8a4577281f837a4ae20881073 WatchSource:0}: Error finding container 617c1fe920842500bf22662dbcff00fb4394c8a8a4577281f837a4ae20881073: Status 404 returned error can't find the container with id 617c1fe920842500bf22662dbcff00fb4394c8a8a4577281f837a4ae20881073 Jan 29 17:02:21 crc kubenswrapper[4886]: I0129 17:02:21.866371 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4cgwx"] Jan 29 17:02:21 crc kubenswrapper[4886]: W0129 17:02:21.875096 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod204a721b_36ee_4631_8358_f2511f332249.slice/crio-b0ce5d271c3a87e35c87ccbefa1e0c1a96ac0ecd541d22ead6b84099a6bd1679 WatchSource:0}: Error finding container b0ce5d271c3a87e35c87ccbefa1e0c1a96ac0ecd541d22ead6b84099a6bd1679: Status 404 returned error can't find the container with id b0ce5d271c3a87e35c87ccbefa1e0c1a96ac0ecd541d22ead6b84099a6bd1679 Jan 29 17:02:22 crc kubenswrapper[4886]: I0129 17:02:22.664336 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-pmcr7" event={"ID":"2f1c4419-6120-44b9-853c-7a42391db3e7","Type":"ContainerStarted","Data":"617c1fe920842500bf22662dbcff00fb4394c8a8a4577281f837a4ae20881073"} Jan 29 17:02:22 crc kubenswrapper[4886]: I0129 17:02:22.666235 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-4cgwx" event={"ID":"204a721b-36ee-4631-8358-f2511f332249","Type":"ContainerStarted","Data":"b0ce5d271c3a87e35c87ccbefa1e0c1a96ac0ecd541d22ead6b84099a6bd1679"} Jan 29 17:02:23 crc kubenswrapper[4886]: I0129 17:02:23.734053 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pmcr7"] Jan 29 17:02:23 crc kubenswrapper[4886]: I0129 17:02:23.772730 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-tn5pt"] Jan 29 17:02:23 crc kubenswrapper[4886]: I0129 17:02:23.774709 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-tn5pt" Jan 29 17:02:23 crc kubenswrapper[4886]: I0129 17:02:23.785930 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-tn5pt"] Jan 29 17:02:23 crc kubenswrapper[4886]: I0129 17:02:23.888515 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3748c627-3deb-4b89-acd3-2269f42ba343-dns-svc\") pod \"dnsmasq-dns-666b6646f7-tn5pt\" (UID: \"3748c627-3deb-4b89-acd3-2269f42ba343\") " pod="openstack/dnsmasq-dns-666b6646f7-tn5pt" Jan 29 17:02:23 crc kubenswrapper[4886]: I0129 17:02:23.888607 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3748c627-3deb-4b89-acd3-2269f42ba343-config\") pod \"dnsmasq-dns-666b6646f7-tn5pt\" (UID: \"3748c627-3deb-4b89-acd3-2269f42ba343\") " pod="openstack/dnsmasq-dns-666b6646f7-tn5pt" Jan 29 17:02:23 crc kubenswrapper[4886]: I0129 17:02:23.888680 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6zcd\" (UniqueName: \"kubernetes.io/projected/3748c627-3deb-4b89-acd3-2269f42ba343-kube-api-access-x6zcd\") pod \"dnsmasq-dns-666b6646f7-tn5pt\" (UID: \"3748c627-3deb-4b89-acd3-2269f42ba343\") " pod="openstack/dnsmasq-dns-666b6646f7-tn5pt" Jan 29 17:02:23 crc kubenswrapper[4886]: I0129 17:02:23.990786 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6zcd\" (UniqueName: \"kubernetes.io/projected/3748c627-3deb-4b89-acd3-2269f42ba343-kube-api-access-x6zcd\") pod \"dnsmasq-dns-666b6646f7-tn5pt\" (UID: \"3748c627-3deb-4b89-acd3-2269f42ba343\") " pod="openstack/dnsmasq-dns-666b6646f7-tn5pt" Jan 29 17:02:23 crc kubenswrapper[4886]: I0129 17:02:23.990973 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3748c627-3deb-4b89-acd3-2269f42ba343-dns-svc\") pod \"dnsmasq-dns-666b6646f7-tn5pt\" (UID: \"3748c627-3deb-4b89-acd3-2269f42ba343\") " pod="openstack/dnsmasq-dns-666b6646f7-tn5pt" Jan 29 17:02:23 crc kubenswrapper[4886]: I0129 17:02:23.991042 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3748c627-3deb-4b89-acd3-2269f42ba343-config\") pod \"dnsmasq-dns-666b6646f7-tn5pt\" (UID: \"3748c627-3deb-4b89-acd3-2269f42ba343\") " pod="openstack/dnsmasq-dns-666b6646f7-tn5pt" Jan 29 17:02:23 crc kubenswrapper[4886]: I0129 17:02:23.992123 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3748c627-3deb-4b89-acd3-2269f42ba343-config\") pod \"dnsmasq-dns-666b6646f7-tn5pt\" (UID: \"3748c627-3deb-4b89-acd3-2269f42ba343\") " pod="openstack/dnsmasq-dns-666b6646f7-tn5pt" Jan 29 17:02:23 crc kubenswrapper[4886]: I0129 17:02:23.992271 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3748c627-3deb-4b89-acd3-2269f42ba343-dns-svc\") pod \"dnsmasq-dns-666b6646f7-tn5pt\" (UID: \"3748c627-3deb-4b89-acd3-2269f42ba343\") " pod="openstack/dnsmasq-dns-666b6646f7-tn5pt" Jan 29 17:02:24 crc kubenswrapper[4886]: I0129 17:02:24.015070 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6zcd\" (UniqueName: 
\"kubernetes.io/projected/3748c627-3deb-4b89-acd3-2269f42ba343-kube-api-access-x6zcd\") pod \"dnsmasq-dns-666b6646f7-tn5pt\" (UID: \"3748c627-3deb-4b89-acd3-2269f42ba343\") " pod="openstack/dnsmasq-dns-666b6646f7-tn5pt" Jan 29 17:02:24 crc kubenswrapper[4886]: I0129 17:02:24.122317 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-tn5pt" Jan 29 17:02:24 crc kubenswrapper[4886]: I0129 17:02:24.143595 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4cgwx"] Jan 29 17:02:24 crc kubenswrapper[4886]: I0129 17:02:24.164281 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bqbqx"] Jan 29 17:02:24 crc kubenswrapper[4886]: I0129 17:02:24.166021 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-bqbqx" Jan 29 17:02:24 crc kubenswrapper[4886]: I0129 17:02:24.180262 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bqbqx"] Jan 29 17:02:24 crc kubenswrapper[4886]: I0129 17:02:24.310661 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb44s\" (UniqueName: \"kubernetes.io/projected/6508ccc6-d71f-449d-bbe1-83270d005815-kube-api-access-kb44s\") pod \"dnsmasq-dns-57d769cc4f-bqbqx\" (UID: \"6508ccc6-d71f-449d-bbe1-83270d005815\") " pod="openstack/dnsmasq-dns-57d769cc4f-bqbqx" Jan 29 17:02:24 crc kubenswrapper[4886]: I0129 17:02:24.310940 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6508ccc6-d71f-449d-bbe1-83270d005815-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-bqbqx\" (UID: \"6508ccc6-d71f-449d-bbe1-83270d005815\") " pod="openstack/dnsmasq-dns-57d769cc4f-bqbqx" Jan 29 17:02:24 crc kubenswrapper[4886]: I0129 17:02:24.311221 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6508ccc6-d71f-449d-bbe1-83270d005815-config\") pod \"dnsmasq-dns-57d769cc4f-bqbqx\" (UID: \"6508ccc6-d71f-449d-bbe1-83270d005815\") " pod="openstack/dnsmasq-dns-57d769cc4f-bqbqx" Jan 29 17:02:24 crc kubenswrapper[4886]: I0129 17:02:24.412512 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6508ccc6-d71f-449d-bbe1-83270d005815-config\") pod \"dnsmasq-dns-57d769cc4f-bqbqx\" (UID: \"6508ccc6-d71f-449d-bbe1-83270d005815\") " pod="openstack/dnsmasq-dns-57d769cc4f-bqbqx" Jan 29 17:02:24 crc kubenswrapper[4886]: I0129 17:02:24.412625 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb44s\" (UniqueName: \"kubernetes.io/projected/6508ccc6-d71f-449d-bbe1-83270d005815-kube-api-access-kb44s\") pod \"dnsmasq-dns-57d769cc4f-bqbqx\" (UID: \"6508ccc6-d71f-449d-bbe1-83270d005815\") " pod="openstack/dnsmasq-dns-57d769cc4f-bqbqx" Jan 29 17:02:24 crc kubenswrapper[4886]: I0129 17:02:24.412661 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6508ccc6-d71f-449d-bbe1-83270d005815-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-bqbqx\" (UID: \"6508ccc6-d71f-449d-bbe1-83270d005815\") " pod="openstack/dnsmasq-dns-57d769cc4f-bqbqx" Jan 29 17:02:24 crc kubenswrapper[4886]: I0129 17:02:24.413690 4886 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6508ccc6-d71f-449d-bbe1-83270d005815-config\") pod \"dnsmasq-dns-57d769cc4f-bqbqx\" (UID: \"6508ccc6-d71f-449d-bbe1-83270d005815\") " pod="openstack/dnsmasq-dns-57d769cc4f-bqbqx" Jan 29 17:02:24 crc kubenswrapper[4886]: I0129 17:02:24.414085 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6508ccc6-d71f-449d-bbe1-83270d005815-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-bqbqx\" (UID: \"6508ccc6-d71f-449d-bbe1-83270d005815\") " pod="openstack/dnsmasq-dns-57d769cc4f-bqbqx" Jan 29 17:02:24 crc kubenswrapper[4886]: I0129 17:02:24.444066 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb44s\" (UniqueName: \"kubernetes.io/projected/6508ccc6-d71f-449d-bbe1-83270d005815-kube-api-access-kb44s\") pod \"dnsmasq-dns-57d769cc4f-bqbqx\" (UID: \"6508ccc6-d71f-449d-bbe1-83270d005815\") " pod="openstack/dnsmasq-dns-57d769cc4f-bqbqx" Jan 29 17:02:24 crc kubenswrapper[4886]: I0129 17:02:24.569641 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-bqbqx" Jan 29 17:02:24 crc kubenswrapper[4886]: I0129 17:02:24.733010 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-tn5pt"] Jan 29 17:02:24 crc kubenswrapper[4886]: W0129 17:02:24.741681 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3748c627_3deb_4b89_acd3_2269f42ba343.slice/crio-5ab6a774b30c4926836ad5d20a9d8ca3a61ba5556b7b5bbd72dc9a90a6ac1502 WatchSource:0}: Error finding container 5ab6a774b30c4926836ad5d20a9d8ca3a61ba5556b7b5bbd72dc9a90a6ac1502: Status 404 returned error can't find the container with id 5ab6a774b30c4926836ad5d20a9d8ca3a61ba5556b7b5bbd72dc9a90a6ac1502 Jan 29 17:02:24 crc kubenswrapper[4886]: I0129 17:02:24.902704 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 17:02:24 crc kubenswrapper[4886]: I0129 17:02:24.904762 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 17:02:24 crc kubenswrapper[4886]: I0129 17:02:24.944077 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-wvnrk" Jan 29 17:02:24 crc kubenswrapper[4886]: I0129 17:02:24.945389 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 29 17:02:24 crc kubenswrapper[4886]: I0129 17:02:24.945947 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 29 17:02:24 crc kubenswrapper[4886]: I0129 17:02:24.946143 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 29 17:02:24 crc kubenswrapper[4886]: I0129 17:02:24.946618 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 29 17:02:24 crc kubenswrapper[4886]: I0129 17:02:24.959619 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 29 17:02:24 crc kubenswrapper[4886]: I0129 17:02:24.962732 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 29 17:02:24 crc kubenswrapper[4886]: I0129 17:02:24.981869 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 17:02:24 crc kubenswrapper[4886]: I0129 17:02:24.995462 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:24.998376 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.004704 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.008021 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.013836 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.023897 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.047779 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2b0be43b-8956-45aa-ad50-de9183b3fea3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.047833 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b0be43b-8956-45aa-ad50-de9183b3fea3-config-data\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.047859 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2b0be43b-8956-45aa-ad50-de9183b3fea3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.047972 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2b0be43b-8956-45aa-ad50-de9183b3fea3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.048047 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2b0be43b-8956-45aa-ad50-de9183b3fea3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.048081 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2b0be43b-8956-45aa-ad50-de9183b3fea3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.048229 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2b0be43b-8956-45aa-ad50-de9183b3fea3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.048276 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ae4636fd-e9b4-4ea8-ae5f-484166bf5cbc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae4636fd-e9b4-4ea8-ae5f-484166bf5cbc\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.048499 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpbz9\" (UniqueName: \"kubernetes.io/projected/2b0be43b-8956-45aa-ad50-de9183b3fea3-kube-api-access-vpbz9\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.048606 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2b0be43b-8956-45aa-ad50-de9183b3fea3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.048672 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2b0be43b-8956-45aa-ad50-de9183b3fea3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.099182 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bqbqx"] Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.150909 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2b0be43b-8956-45aa-ad50-de9183b3fea3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.151213 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67qmm\" (UniqueName: \"kubernetes.io/projected/49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10-kube-api-access-67qmm\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.151269 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/842bfe4d-04ba-4143-9076-3033163c7b82-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.151299 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2b0be43b-8956-45aa-ad50-de9183b3fea3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.151342 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.151363 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ae4636fd-e9b4-4ea8-ae5f-484166bf5cbc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae4636fd-e9b4-4ea8-ae5f-484166bf5cbc\") pod \"rabbitmq-server-0\" (UID: 
\"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.151382 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/842bfe4d-04ba-4143-9076-3033163c7b82-server-conf\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.151397 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.151434 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10-config-data\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.151456 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/842bfe4d-04ba-4143-9076-3033163c7b82-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.151478 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.151515 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpbz9\" (UniqueName: \"kubernetes.io/projected/2b0be43b-8956-45aa-ad50-de9183b3fea3-kube-api-access-vpbz9\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.151546 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-863286b1-f8a7-473e-bfad-effd8e0e46c7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-863286b1-f8a7-473e-bfad-effd8e0e46c7\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.151582 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2b0be43b-8956-45aa-ad50-de9183b3fea3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.151602 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2b0be43b-8956-45aa-ad50-de9183b3fea3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " 
pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.151620 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.151653 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.151684 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv64g\" (UniqueName: \"kubernetes.io/projected/842bfe4d-04ba-4143-9076-3033163c7b82-kube-api-access-hv64g\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.151706 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/842bfe4d-04ba-4143-9076-3033163c7b82-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.151746 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/842bfe4d-04ba-4143-9076-3033163c7b82-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.151762 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/842bfe4d-04ba-4143-9076-3033163c7b82-config-data\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.151778 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2b0be43b-8956-45aa-ad50-de9183b3fea3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.151811 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10-server-conf\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.151832 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b0be43b-8956-45aa-ad50-de9183b3fea3-config-data\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.151847 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2b0be43b-8956-45aa-ad50-de9183b3fea3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.151863 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ffb99285-fad5-4b64-a7c1-8c79996a97a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ffb99285-fad5-4b64-a7c1-8c79996a97a0\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.151920 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.151943 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/842bfe4d-04ba-4143-9076-3033163c7b82-pod-info\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.151986 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2b0be43b-8956-45aa-ad50-de9183b3fea3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.152010 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/842bfe4d-04ba-4143-9076-3033163c7b82-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.152031 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10-pod-info\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.152077 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/842bfe4d-04ba-4143-9076-3033163c7b82-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.152100 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2b0be43b-8956-45aa-ad50-de9183b3fea3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.154128 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/2b0be43b-8956-45aa-ad50-de9183b3fea3-config-data\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.154158 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2b0be43b-8956-45aa-ad50-de9183b3fea3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.154768 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2b0be43b-8956-45aa-ad50-de9183b3fea3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.155203 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2b0be43b-8956-45aa-ad50-de9183b3fea3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.155290 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2b0be43b-8956-45aa-ad50-de9183b3fea3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.159250 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2b0be43b-8956-45aa-ad50-de9183b3fea3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.159611 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2b0be43b-8956-45aa-ad50-de9183b3fea3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.159629 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.160500 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ae4636fd-e9b4-4ea8-ae5f-484166bf5cbc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae4636fd-e9b4-4ea8-ae5f-484166bf5cbc\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d213695ed3765abf3a041dd1be7937f5b64f87e22fac48d2c805fc17dc0e08a3/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.161190 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2b0be43b-8956-45aa-ad50-de9183b3fea3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.161570 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2b0be43b-8956-45aa-ad50-de9183b3fea3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.177266 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpbz9\" (UniqueName: \"kubernetes.io/projected/2b0be43b-8956-45aa-ad50-de9183b3fea3-kube-api-access-vpbz9\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.217511 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ae4636fd-e9b4-4ea8-ae5f-484166bf5cbc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae4636fd-e9b4-4ea8-ae5f-484166bf5cbc\") pod \"rabbitmq-server-0\" (UID: \"2b0be43b-8956-45aa-ad50-de9183b3fea3\") " pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.256009 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/842bfe4d-04ba-4143-9076-3033163c7b82-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.256064 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/842bfe4d-04ba-4143-9076-3033163c7b82-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.256088 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/842bfe4d-04ba-4143-9076-3033163c7b82-config-data\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.256106 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10-server-conf\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 
17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.256130 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ffb99285-fad5-4b64-a7c1-8c79996a97a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ffb99285-fad5-4b64-a7c1-8c79996a97a0\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.256150 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.256167 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/842bfe4d-04ba-4143-9076-3033163c7b82-pod-info\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.256190 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/842bfe4d-04ba-4143-9076-3033163c7b82-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.256204 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10-pod-info\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.256227 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/842bfe4d-04ba-4143-9076-3033163c7b82-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.256259 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67qmm\" (UniqueName: \"kubernetes.io/projected/49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10-kube-api-access-67qmm\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.256283 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/842bfe4d-04ba-4143-9076-3033163c7b82-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.256308 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.256411 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/842bfe4d-04ba-4143-9076-3033163c7b82-server-conf\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.256429 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.256453 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10-config-data\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.256475 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/842bfe4d-04ba-4143-9076-3033163c7b82-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.256498 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.256535 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-863286b1-f8a7-473e-bfad-effd8e0e46c7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-863286b1-f8a7-473e-bfad-effd8e0e46c7\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.256557 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.256576 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.256611 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv64g\" (UniqueName: \"kubernetes.io/projected/842bfe4d-04ba-4143-9076-3033163c7b82-kube-api-access-hv64g\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.257337 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 
17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.258460 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10-server-conf\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.258810 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/842bfe4d-04ba-4143-9076-3033163c7b82-config-data\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.259147 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/842bfe4d-04ba-4143-9076-3033163c7b82-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.259175 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/842bfe4d-04ba-4143-9076-3033163c7b82-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.264815 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.265558 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/842bfe4d-04ba-4143-9076-3033163c7b82-server-conf\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.265935 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/842bfe4d-04ba-4143-9076-3033163c7b82-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.266506 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/842bfe4d-04ba-4143-9076-3033163c7b82-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.267473 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.268380 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.271916 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ffb99285-fad5-4b64-a7c1-8c79996a97a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ffb99285-fad5-4b64-a7c1-8c79996a97a0\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/326093cb55c704f9a2105b595679c793cb8447479f9731f8a7fd148174243d7a/globalmount\"" pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.269269 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.269569 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.270230 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10-config-data\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.271644 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/842bfe4d-04ba-4143-9076-3033163c7b82-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.269466 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/842bfe4d-04ba-4143-9076-3033163c7b82-pod-info\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.280122 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.280275 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-863286b1-f8a7-473e-bfad-effd8e0e46c7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-863286b1-f8a7-473e-bfad-effd8e0e46c7\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b05e7df932d194e194076bc038f6db5e1e433307caecab672c694750eca73b77/globalmount\"" pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.281323 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10-pod-info\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.281946 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.287068 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.291030 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.296085 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.298574 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.298647 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.298708 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.298871 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.299113 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-pch54" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.299180 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.309441 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67qmm\" (UniqueName: \"kubernetes.io/projected/49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10-kube-api-access-67qmm\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.313383 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.313539 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" 
(UniqueName: \"kubernetes.io/projected/842bfe4d-04ba-4143-9076-3033163c7b82-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.328764 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv64g\" (UniqueName: \"kubernetes.io/projected/842bfe4d-04ba-4143-9076-3033163c7b82-kube-api-access-hv64g\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.336459 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.364509 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9d0db9ae-746b-419a-bc61-bf85645d2bff-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.364593 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-12dabd5a-7f4d-4d12-a40b-12125ccd9878\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-12dabd5a-7f4d-4d12-a40b-12125ccd9878\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.364616 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9d0db9ae-746b-419a-bc61-bf85645d2bff-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.364661 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9d0db9ae-746b-419a-bc61-bf85645d2bff-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.364690 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9d0db9ae-746b-419a-bc61-bf85645d2bff-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.364742 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d0db9ae-746b-419a-bc61-bf85645d2bff-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.364759 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9d0db9ae-746b-419a-bc61-bf85645d2bff-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.364780 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9d0db9ae-746b-419a-bc61-bf85645d2bff-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.364816 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpbmf\" (UniqueName: \"kubernetes.io/projected/9d0db9ae-746b-419a-bc61-bf85645d2bff-kube-api-access-bpbmf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.364838 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9d0db9ae-746b-419a-bc61-bf85645d2bff-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.364878 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9d0db9ae-746b-419a-bc61-bf85645d2bff-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.391392 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ffb99285-fad5-4b64-a7c1-8c79996a97a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ffb99285-fad5-4b64-a7c1-8c79996a97a0\") pod \"rabbitmq-server-2\" (UID: \"842bfe4d-04ba-4143-9076-3033163c7b82\") " pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.432830 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-863286b1-f8a7-473e-bfad-effd8e0e46c7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-863286b1-f8a7-473e-bfad-effd8e0e46c7\") pod \"rabbitmq-server-1\" (UID: \"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10\") " pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.442746 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.466568 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9d0db9ae-746b-419a-bc61-bf85645d2bff-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.466653 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-12dabd5a-7f4d-4d12-a40b-12125ccd9878\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-12dabd5a-7f4d-4d12-a40b-12125ccd9878\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.466679 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9d0db9ae-746b-419a-bc61-bf85645d2bff-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.466713 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9d0db9ae-746b-419a-bc61-bf85645d2bff-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.466746 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9d0db9ae-746b-419a-bc61-bf85645d2bff-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.466789 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d0db9ae-746b-419a-bc61-bf85645d2bff-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.466812 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9d0db9ae-746b-419a-bc61-bf85645d2bff-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.466837 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9d0db9ae-746b-419a-bc61-bf85645d2bff-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.466870 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpbmf\" (UniqueName: \"kubernetes.io/projected/9d0db9ae-746b-419a-bc61-bf85645d2bff-kube-api-access-bpbmf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc 
kubenswrapper[4886]: I0129 17:02:25.466899 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9d0db9ae-746b-419a-bc61-bf85645d2bff-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.466935 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9d0db9ae-746b-419a-bc61-bf85645d2bff-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.467844 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9d0db9ae-746b-419a-bc61-bf85645d2bff-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.468158 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9d0db9ae-746b-419a-bc61-bf85645d2bff-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.471962 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9d0db9ae-746b-419a-bc61-bf85645d2bff-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.472816 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d0db9ae-746b-419a-bc61-bf85645d2bff-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.473143 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9d0db9ae-746b-419a-bc61-bf85645d2bff-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.481571 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9d0db9ae-746b-419a-bc61-bf85645d2bff-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.484115 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9d0db9ae-746b-419a-bc61-bf85645d2bff-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.486233 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/9d0db9ae-746b-419a-bc61-bf85645d2bff-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.489979 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9d0db9ae-746b-419a-bc61-bf85645d2bff-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.498626 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpbmf\" (UniqueName: \"kubernetes.io/projected/9d0db9ae-746b-419a-bc61-bf85645d2bff-kube-api-access-bpbmf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.636742 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.666291 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.666341 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-12dabd5a-7f4d-4d12-a40b-12125ccd9878\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-12dabd5a-7f4d-4d12-a40b-12125ccd9878\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bee345f7e070967b6cc29d6dbc72d8fe7f7c7012e7f3befd39c45a65d0513986/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.725569 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-bqbqx" event={"ID":"6508ccc6-d71f-449d-bbe1-83270d005815","Type":"ContainerStarted","Data":"3cb5dbf55000d2d62fd9df0707aa0b2ae3790c985165faca182a19e1e38e6908"} Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.755606 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-12dabd5a-7f4d-4d12-a40b-12125ccd9878\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-12dabd5a-7f4d-4d12-a40b-12125ccd9878\") pod \"rabbitmq-cell1-server-0\" (UID: \"9d0db9ae-746b-419a-bc61-bf85645d2bff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.768937 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-tn5pt" event={"ID":"3748c627-3deb-4b89-acd3-2269f42ba343","Type":"ContainerStarted","Data":"5ab6a774b30c4926836ad5d20a9d8ca3a61ba5556b7b5bbd72dc9a90a6ac1502"} Jan 29 17:02:25 crc kubenswrapper[4886]: I0129 17:02:25.965203 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.127768 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.211561 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.399065 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 29 17:02:26 crc kubenswrapper[4886]: W0129 17:02:26.456049 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod842bfe4d_04ba_4143_9076_3033163c7b82.slice/crio-a37b33399d781fa177e976ceeb1b5940ed29651715b90f0db3dbe52f088dc68f WatchSource:0}: Error finding container a37b33399d781fa177e976ceeb1b5940ed29651715b90f0db3dbe52f088dc68f: Status 404 returned error can't find the container with id a37b33399d781fa177e976ceeb1b5940ed29651715b90f0db3dbe52f088dc68f Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.477607 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.480580 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.490138 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.492829 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.493031 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-jhmnh" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.493148 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.494111 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.543704 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.614804 4886 scope.go:117] "RemoveContainer" containerID="1ef597c576c05004c5148470ade7ddd51ab3cad8d942f918ff09afb054559dfc" Jan 29 17:02:26 crc kubenswrapper[4886]: E0129 17:02:26.615089 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.623453 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98bed306-aa68-4e53-affc-e04497079ccb-operator-scripts\") pod \"openstack-galera-0\" (UID: \"98bed306-aa68-4e53-affc-e04497079ccb\") " pod="openstack/openstack-galera-0" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.623507 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/98bed306-aa68-4e53-affc-e04497079ccb-config-data-generated\") pod \"openstack-galera-0\" (UID: \"98bed306-aa68-4e53-affc-e04497079ccb\") " pod="openstack/openstack-galera-0" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.623706 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c95b82a4-c681-4c74-b958-f29b26ce56ea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c95b82a4-c681-4c74-b958-f29b26ce56ea\") pod \"openstack-galera-0\" (UID: \"98bed306-aa68-4e53-affc-e04497079ccb\") " pod="openstack/openstack-galera-0" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.623896 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bed306-aa68-4e53-affc-e04497079ccb-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"98bed306-aa68-4e53-affc-e04497079ccb\") " pod="openstack/openstack-galera-0" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.623964 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/98bed306-aa68-4e53-affc-e04497079ccb-kolla-config\") pod \"openstack-galera-0\" (UID: \"98bed306-aa68-4e53-affc-e04497079ccb\") " pod="openstack/openstack-galera-0" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.624100 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/98bed306-aa68-4e53-affc-e04497079ccb-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"98bed306-aa68-4e53-affc-e04497079ccb\") " pod="openstack/openstack-galera-0" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.624823 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2mz6\" (UniqueName: \"kubernetes.io/projected/98bed306-aa68-4e53-affc-e04497079ccb-kube-api-access-x2mz6\") pod \"openstack-galera-0\" (UID: \"98bed306-aa68-4e53-affc-e04497079ccb\") " pod="openstack/openstack-galera-0" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.625113 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/98bed306-aa68-4e53-affc-e04497079ccb-config-data-default\") pod \"openstack-galera-0\" (UID: \"98bed306-aa68-4e53-affc-e04497079ccb\") " pod="openstack/openstack-galera-0" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.728537 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/98bed306-aa68-4e53-affc-e04497079ccb-config-data-generated\") pod \"openstack-galera-0\" (UID: \"98bed306-aa68-4e53-affc-e04497079ccb\") " pod="openstack/openstack-galera-0" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.729059 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c95b82a4-c681-4c74-b958-f29b26ce56ea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c95b82a4-c681-4c74-b958-f29b26ce56ea\") pod \"openstack-galera-0\" (UID: \"98bed306-aa68-4e53-affc-e04497079ccb\") " pod="openstack/openstack-galera-0" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 
17:02:26.729225 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/98bed306-aa68-4e53-affc-e04497079ccb-config-data-generated\") pod \"openstack-galera-0\" (UID: \"98bed306-aa68-4e53-affc-e04497079ccb\") " pod="openstack/openstack-galera-0" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.729244 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bed306-aa68-4e53-affc-e04497079ccb-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"98bed306-aa68-4e53-affc-e04497079ccb\") " pod="openstack/openstack-galera-0" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.730515 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/98bed306-aa68-4e53-affc-e04497079ccb-kolla-config\") pod \"openstack-galera-0\" (UID: \"98bed306-aa68-4e53-affc-e04497079ccb\") " pod="openstack/openstack-galera-0" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.730626 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/98bed306-aa68-4e53-affc-e04497079ccb-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"98bed306-aa68-4e53-affc-e04497079ccb\") " pod="openstack/openstack-galera-0" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.730651 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2mz6\" (UniqueName: \"kubernetes.io/projected/98bed306-aa68-4e53-affc-e04497079ccb-kube-api-access-x2mz6\") pod \"openstack-galera-0\" (UID: \"98bed306-aa68-4e53-affc-e04497079ccb\") " pod="openstack/openstack-galera-0" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.730869 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/98bed306-aa68-4e53-affc-e04497079ccb-config-data-default\") pod \"openstack-galera-0\" (UID: \"98bed306-aa68-4e53-affc-e04497079ccb\") " pod="openstack/openstack-galera-0" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.730940 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98bed306-aa68-4e53-affc-e04497079ccb-operator-scripts\") pod \"openstack-galera-0\" (UID: \"98bed306-aa68-4e53-affc-e04497079ccb\") " pod="openstack/openstack-galera-0" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.733436 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/98bed306-aa68-4e53-affc-e04497079ccb-config-data-default\") pod \"openstack-galera-0\" (UID: \"98bed306-aa68-4e53-affc-e04497079ccb\") " pod="openstack/openstack-galera-0" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.733704 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98bed306-aa68-4e53-affc-e04497079ccb-operator-scripts\") pod \"openstack-galera-0\" (UID: \"98bed306-aa68-4e53-affc-e04497079ccb\") " pod="openstack/openstack-galera-0" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.735640 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.737578 4886 
csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.737815 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c95b82a4-c681-4c74-b958-f29b26ce56ea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c95b82a4-c681-4c74-b958-f29b26ce56ea\") pod \"openstack-galera-0\" (UID: \"98bed306-aa68-4e53-affc-e04497079ccb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3752126cfed518dabb57802d31fe1f9ab6a18ac412e8a3d2f0a6cf445251bd07/globalmount\"" pod="openstack/openstack-galera-0" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.738998 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/98bed306-aa68-4e53-affc-e04497079ccb-kolla-config\") pod \"openstack-galera-0\" (UID: \"98bed306-aa68-4e53-affc-e04497079ccb\") " pod="openstack/openstack-galera-0" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.742124 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bed306-aa68-4e53-affc-e04497079ccb-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"98bed306-aa68-4e53-affc-e04497079ccb\") " pod="openstack/openstack-galera-0" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.746138 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/98bed306-aa68-4e53-affc-e04497079ccb-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"98bed306-aa68-4e53-affc-e04497079ccb\") " pod="openstack/openstack-galera-0" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.751922 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2mz6\" (UniqueName: \"kubernetes.io/projected/98bed306-aa68-4e53-affc-e04497079ccb-kube-api-access-x2mz6\") pod \"openstack-galera-0\" (UID: \"98bed306-aa68-4e53-affc-e04497079ccb\") " pod="openstack/openstack-galera-0" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.795085 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c95b82a4-c681-4c74-b958-f29b26ce56ea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c95b82a4-c681-4c74-b958-f29b26ce56ea\") pod \"openstack-galera-0\" (UID: \"98bed306-aa68-4e53-affc-e04497079ccb\") " pod="openstack/openstack-galera-0" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.822306 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.870536 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"842bfe4d-04ba-4143-9076-3033163c7b82","Type":"ContainerStarted","Data":"a37b33399d781fa177e976ceeb1b5940ed29651715b90f0db3dbe52f088dc68f"} Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.874648 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10","Type":"ContainerStarted","Data":"f8f1b5546a85023fcb8e48d8f18ea19083d41ec9d738804c59ee6271fe642723"} Jan 29 17:02:26 crc kubenswrapper[4886]: I0129 17:02:26.890890 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2b0be43b-8956-45aa-ad50-de9183b3fea3","Type":"ContainerStarted","Data":"3b52df94d505c7f7b34cd527062caeb6a596ff835c3122d7c780516aec2c0f6d"} Jan 29 17:02:27 crc kubenswrapper[4886]: I0129 17:02:27.458295 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 17:02:27 crc kubenswrapper[4886]: W0129 17:02:27.520485 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98bed306_aa68_4e53_affc_e04497079ccb.slice/crio-b56dd68fc17b84a407ba9baf75650d619ea6c98198893b53c62470f66159797d WatchSource:0}: Error finding container b56dd68fc17b84a407ba9baf75650d619ea6c98198893b53c62470f66159797d: Status 404 returned error can't find the container with id b56dd68fc17b84a407ba9baf75650d619ea6c98198893b53c62470f66159797d Jan 29 17:02:27 crc kubenswrapper[4886]: I0129 17:02:27.818434 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 17:02:27 crc kubenswrapper[4886]: I0129 17:02:27.820022 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 17:02:27 crc kubenswrapper[4886]: I0129 17:02:27.829767 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-gp68l" Jan 29 17:02:27 crc kubenswrapper[4886]: I0129 17:02:27.830015 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 29 17:02:27 crc kubenswrapper[4886]: I0129 17:02:27.830131 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 29 17:02:27 crc kubenswrapper[4886]: I0129 17:02:27.830241 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 29 17:02:27 crc kubenswrapper[4886]: I0129 17:02:27.840930 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 17:02:27 crc kubenswrapper[4886]: I0129 17:02:27.925293 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9d0db9ae-746b-419a-bc61-bf85645d2bff","Type":"ContainerStarted","Data":"8e3c7aa1c69a329a7427b4ac8e75a6ba30bf1c14cd9bec54b7145d363fed3093"} Jan 29 17:02:27 crc kubenswrapper[4886]: I0129 17:02:27.942307 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"98bed306-aa68-4e53-affc-e04497079ccb","Type":"ContainerStarted","Data":"b56dd68fc17b84a407ba9baf75650d619ea6c98198893b53c62470f66159797d"} Jan 29 17:02:27 crc kubenswrapper[4886]: I0129 17:02:27.971501 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7khl\" (UniqueName: \"kubernetes.io/projected/954d7d1e-fd92-4c83-87d8-87a1f866dbbe-kube-api-access-k7khl\") pod \"openstack-cell1-galera-0\" (UID: \"954d7d1e-fd92-4c83-87d8-87a1f866dbbe\") " pod="openstack/openstack-cell1-galera-0" Jan 29 17:02:27 crc kubenswrapper[4886]: I0129 17:02:27.971563 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0e4b33e1-211c-4727-b145-8a8e2e359423\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e4b33e1-211c-4727-b145-8a8e2e359423\") pod \"openstack-cell1-galera-0\" (UID: \"954d7d1e-fd92-4c83-87d8-87a1f866dbbe\") " pod="openstack/openstack-cell1-galera-0" Jan 29 17:02:27 crc kubenswrapper[4886]: I0129 17:02:27.973879 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/954d7d1e-fd92-4c83-87d8-87a1f866dbbe-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"954d7d1e-fd92-4c83-87d8-87a1f866dbbe\") " pod="openstack/openstack-cell1-galera-0" Jan 29 17:02:27 crc kubenswrapper[4886]: I0129 17:02:27.973948 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/954d7d1e-fd92-4c83-87d8-87a1f866dbbe-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"954d7d1e-fd92-4c83-87d8-87a1f866dbbe\") " pod="openstack/openstack-cell1-galera-0" Jan 29 17:02:27 crc kubenswrapper[4886]: I0129 17:02:27.974015 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/954d7d1e-fd92-4c83-87d8-87a1f866dbbe-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: 
\"954d7d1e-fd92-4c83-87d8-87a1f866dbbe\") " pod="openstack/openstack-cell1-galera-0" Jan 29 17:02:27 crc kubenswrapper[4886]: I0129 17:02:27.974078 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/954d7d1e-fd92-4c83-87d8-87a1f866dbbe-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"954d7d1e-fd92-4c83-87d8-87a1f866dbbe\") " pod="openstack/openstack-cell1-galera-0" Jan 29 17:02:27 crc kubenswrapper[4886]: I0129 17:02:27.974343 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/954d7d1e-fd92-4c83-87d8-87a1f866dbbe-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"954d7d1e-fd92-4c83-87d8-87a1f866dbbe\") " pod="openstack/openstack-cell1-galera-0" Jan 29 17:02:27 crc kubenswrapper[4886]: I0129 17:02:27.974415 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/954d7d1e-fd92-4c83-87d8-87a1f866dbbe-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"954d7d1e-fd92-4c83-87d8-87a1f866dbbe\") " pod="openstack/openstack-cell1-galera-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.077093 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/954d7d1e-fd92-4c83-87d8-87a1f866dbbe-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"954d7d1e-fd92-4c83-87d8-87a1f866dbbe\") " pod="openstack/openstack-cell1-galera-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.077180 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/954d7d1e-fd92-4c83-87d8-87a1f866dbbe-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"954d7d1e-fd92-4c83-87d8-87a1f866dbbe\") " pod="openstack/openstack-cell1-galera-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.077227 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7khl\" (UniqueName: \"kubernetes.io/projected/954d7d1e-fd92-4c83-87d8-87a1f866dbbe-kube-api-access-k7khl\") pod \"openstack-cell1-galera-0\" (UID: \"954d7d1e-fd92-4c83-87d8-87a1f866dbbe\") " pod="openstack/openstack-cell1-galera-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.077258 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0e4b33e1-211c-4727-b145-8a8e2e359423\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e4b33e1-211c-4727-b145-8a8e2e359423\") pod \"openstack-cell1-galera-0\" (UID: \"954d7d1e-fd92-4c83-87d8-87a1f866dbbe\") " pod="openstack/openstack-cell1-galera-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.077318 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/954d7d1e-fd92-4c83-87d8-87a1f866dbbe-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"954d7d1e-fd92-4c83-87d8-87a1f866dbbe\") " pod="openstack/openstack-cell1-galera-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.077364 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/954d7d1e-fd92-4c83-87d8-87a1f866dbbe-combined-ca-bundle\") pod 
\"openstack-cell1-galera-0\" (UID: \"954d7d1e-fd92-4c83-87d8-87a1f866dbbe\") " pod="openstack/openstack-cell1-galera-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.077409 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/954d7d1e-fd92-4c83-87d8-87a1f866dbbe-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"954d7d1e-fd92-4c83-87d8-87a1f866dbbe\") " pod="openstack/openstack-cell1-galera-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.077461 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/954d7d1e-fd92-4c83-87d8-87a1f866dbbe-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"954d7d1e-fd92-4c83-87d8-87a1f866dbbe\") " pod="openstack/openstack-cell1-galera-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.077674 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/954d7d1e-fd92-4c83-87d8-87a1f866dbbe-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"954d7d1e-fd92-4c83-87d8-87a1f866dbbe\") " pod="openstack/openstack-cell1-galera-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.078052 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/954d7d1e-fd92-4c83-87d8-87a1f866dbbe-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"954d7d1e-fd92-4c83-87d8-87a1f866dbbe\") " pod="openstack/openstack-cell1-galera-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.080201 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/954d7d1e-fd92-4c83-87d8-87a1f866dbbe-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"954d7d1e-fd92-4c83-87d8-87a1f866dbbe\") " pod="openstack/openstack-cell1-galera-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.081820 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/954d7d1e-fd92-4c83-87d8-87a1f866dbbe-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"954d7d1e-fd92-4c83-87d8-87a1f866dbbe\") " pod="openstack/openstack-cell1-galera-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.093309 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/954d7d1e-fd92-4c83-87d8-87a1f866dbbe-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"954d7d1e-fd92-4c83-87d8-87a1f866dbbe\") " pod="openstack/openstack-cell1-galera-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.093314 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.093419 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0e4b33e1-211c-4727-b145-8a8e2e359423\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e4b33e1-211c-4727-b145-8a8e2e359423\") pod \"openstack-cell1-galera-0\" (UID: \"954d7d1e-fd92-4c83-87d8-87a1f866dbbe\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4ea550bbf1bf2f4ac54a6894dfc3a6d7f2959dcdb917de414b494340871d563d/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.093537 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/954d7d1e-fd92-4c83-87d8-87a1f866dbbe-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"954d7d1e-fd92-4c83-87d8-87a1f866dbbe\") " pod="openstack/openstack-cell1-galera-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.096233 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7khl\" (UniqueName: \"kubernetes.io/projected/954d7d1e-fd92-4c83-87d8-87a1f866dbbe-kube-api-access-k7khl\") pod \"openstack-cell1-galera-0\" (UID: \"954d7d1e-fd92-4c83-87d8-87a1f866dbbe\") " pod="openstack/openstack-cell1-galera-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.154906 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.156184 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.159841 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-m5568" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.159998 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.160084 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.162437 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0e4b33e1-211c-4727-b145-8a8e2e359423\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e4b33e1-211c-4727-b145-8a8e2e359423\") pod \"openstack-cell1-galera-0\" (UID: \"954d7d1e-fd92-4c83-87d8-87a1f866dbbe\") " pod="openstack/openstack-cell1-galera-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.194585 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.286481 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/88c8ef15-a2b1-41df-8048-752b56d26653-memcached-tls-certs\") pod \"memcached-0\" (UID: \"88c8ef15-a2b1-41df-8048-752b56d26653\") " pod="openstack/memcached-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.286662 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/88c8ef15-a2b1-41df-8048-752b56d26653-config-data\") pod \"memcached-0\" (UID: \"88c8ef15-a2b1-41df-8048-752b56d26653\") " pod="openstack/memcached-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.286945 
4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c8ef15-a2b1-41df-8048-752b56d26653-combined-ca-bundle\") pod \"memcached-0\" (UID: \"88c8ef15-a2b1-41df-8048-752b56d26653\") " pod="openstack/memcached-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.287044 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/88c8ef15-a2b1-41df-8048-752b56d26653-kolla-config\") pod \"memcached-0\" (UID: \"88c8ef15-a2b1-41df-8048-752b56d26653\") " pod="openstack/memcached-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.287070 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vq5l\" (UniqueName: \"kubernetes.io/projected/88c8ef15-a2b1-41df-8048-752b56d26653-kube-api-access-4vq5l\") pod \"memcached-0\" (UID: \"88c8ef15-a2b1-41df-8048-752b56d26653\") " pod="openstack/memcached-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.388959 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/88c8ef15-a2b1-41df-8048-752b56d26653-kolla-config\") pod \"memcached-0\" (UID: \"88c8ef15-a2b1-41df-8048-752b56d26653\") " pod="openstack/memcached-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.389027 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vq5l\" (UniqueName: \"kubernetes.io/projected/88c8ef15-a2b1-41df-8048-752b56d26653-kube-api-access-4vq5l\") pod \"memcached-0\" (UID: \"88c8ef15-a2b1-41df-8048-752b56d26653\") " pod="openstack/memcached-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.389082 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/88c8ef15-a2b1-41df-8048-752b56d26653-memcached-tls-certs\") pod \"memcached-0\" (UID: \"88c8ef15-a2b1-41df-8048-752b56d26653\") " pod="openstack/memcached-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.389168 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/88c8ef15-a2b1-41df-8048-752b56d26653-config-data\") pod \"memcached-0\" (UID: \"88c8ef15-a2b1-41df-8048-752b56d26653\") " pod="openstack/memcached-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.389352 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c8ef15-a2b1-41df-8048-752b56d26653-combined-ca-bundle\") pod \"memcached-0\" (UID: \"88c8ef15-a2b1-41df-8048-752b56d26653\") " pod="openstack/memcached-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.390554 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/88c8ef15-a2b1-41df-8048-752b56d26653-kolla-config\") pod \"memcached-0\" (UID: \"88c8ef15-a2b1-41df-8048-752b56d26653\") " pod="openstack/memcached-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.391306 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/88c8ef15-a2b1-41df-8048-752b56d26653-config-data\") pod \"memcached-0\" (UID: \"88c8ef15-a2b1-41df-8048-752b56d26653\") " pod="openstack/memcached-0" Jan 
29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.394921 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c8ef15-a2b1-41df-8048-752b56d26653-combined-ca-bundle\") pod \"memcached-0\" (UID: \"88c8ef15-a2b1-41df-8048-752b56d26653\") " pod="openstack/memcached-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.416174 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vq5l\" (UniqueName: \"kubernetes.io/projected/88c8ef15-a2b1-41df-8048-752b56d26653-kube-api-access-4vq5l\") pod \"memcached-0\" (UID: \"88c8ef15-a2b1-41df-8048-752b56d26653\") " pod="openstack/memcached-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.418927 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/88c8ef15-a2b1-41df-8048-752b56d26653-memcached-tls-certs\") pod \"memcached-0\" (UID: \"88c8ef15-a2b1-41df-8048-752b56d26653\") " pod="openstack/memcached-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.458608 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 17:02:28 crc kubenswrapper[4886]: I0129 17:02:28.494849 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 17:02:29 crc kubenswrapper[4886]: I0129 17:02:29.091108 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 17:02:29 crc kubenswrapper[4886]: I0129 17:02:29.385461 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 17:02:30 crc kubenswrapper[4886]: I0129 17:02:30.032731 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"88c8ef15-a2b1-41df-8048-752b56d26653","Type":"ContainerStarted","Data":"1a197767c7bcdfe8876ec470e270c663a1a0267890c843f41fe09eab1488fbab"} Jan 29 17:02:30 crc kubenswrapper[4886]: I0129 17:02:30.035372 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"954d7d1e-fd92-4c83-87d8-87a1f866dbbe","Type":"ContainerStarted","Data":"5f8cdea6298d66d3f2be7ec07d09f99ef9e582a064f6a58e14fe6629079ba303"} Jan 29 17:02:30 crc kubenswrapper[4886]: I0129 17:02:30.698345 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 17:02:30 crc kubenswrapper[4886]: I0129 17:02:30.700239 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 17:02:30 crc kubenswrapper[4886]: I0129 17:02:30.704567 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-cn78q" Jan 29 17:02:30 crc kubenswrapper[4886]: I0129 17:02:30.731191 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 17:02:30 crc kubenswrapper[4886]: I0129 17:02:30.767297 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrp8r\" (UniqueName: \"kubernetes.io/projected/dba0c99a-0f14-42bd-8822-ee79fc73ee41-kube-api-access-xrp8r\") pod \"kube-state-metrics-0\" (UID: \"dba0c99a-0f14-42bd-8822-ee79fc73ee41\") " pod="openstack/kube-state-metrics-0" Jan 29 17:02:30 crc kubenswrapper[4886]: I0129 17:02:30.874040 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrp8r\" (UniqueName: \"kubernetes.io/projected/dba0c99a-0f14-42bd-8822-ee79fc73ee41-kube-api-access-xrp8r\") pod \"kube-state-metrics-0\" (UID: \"dba0c99a-0f14-42bd-8822-ee79fc73ee41\") " pod="openstack/kube-state-metrics-0" Jan 29 17:02:30 crc kubenswrapper[4886]: I0129 17:02:30.908489 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrp8r\" (UniqueName: \"kubernetes.io/projected/dba0c99a-0f14-42bd-8822-ee79fc73ee41-kube-api-access-xrp8r\") pod \"kube-state-metrics-0\" (UID: \"dba0c99a-0f14-42bd-8822-ee79fc73ee41\") " pod="openstack/kube-state-metrics-0" Jan 29 17:02:31 crc kubenswrapper[4886]: I0129 17:02:31.032972 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 17:02:31 crc kubenswrapper[4886]: I0129 17:02:31.561809 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-ld46c"] Jan 29 17:02:31 crc kubenswrapper[4886]: I0129 17:02:31.563550 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ld46c" Jan 29 17:02:31 crc kubenswrapper[4886]: I0129 17:02:31.575782 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Jan 29 17:02:31 crc kubenswrapper[4886]: I0129 17:02:31.576084 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-2xk9f" Jan 29 17:02:31 crc kubenswrapper[4886]: I0129 17:02:31.588895 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-ld46c"] Jan 29 17:02:31 crc kubenswrapper[4886]: I0129 17:02:31.698926 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee1da890-a690-46b4-95aa-3f282b3cdc30-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-ld46c\" (UID: \"ee1da890-a690-46b4-95aa-3f282b3cdc30\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ld46c" Jan 29 17:02:31 crc kubenswrapper[4886]: I0129 17:02:31.698997 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmcn5\" (UniqueName: \"kubernetes.io/projected/ee1da890-a690-46b4-95aa-3f282b3cdc30-kube-api-access-bmcn5\") pod \"observability-ui-dashboards-66cbf594b5-ld46c\" (UID: \"ee1da890-a690-46b4-95aa-3f282b3cdc30\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ld46c" Jan 29 17:02:31 crc kubenswrapper[4886]: I0129 17:02:31.800634 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee1da890-a690-46b4-95aa-3f282b3cdc30-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-ld46c\" (UID: \"ee1da890-a690-46b4-95aa-3f282b3cdc30\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ld46c" Jan 29 17:02:31 crc kubenswrapper[4886]: I0129 17:02:31.800723 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmcn5\" (UniqueName: \"kubernetes.io/projected/ee1da890-a690-46b4-95aa-3f282b3cdc30-kube-api-access-bmcn5\") pod \"observability-ui-dashboards-66cbf594b5-ld46c\" (UID: \"ee1da890-a690-46b4-95aa-3f282b3cdc30\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ld46c" Jan 29 17:02:31 crc kubenswrapper[4886]: E0129 17:02:31.800818 4886 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Jan 29 17:02:31 crc kubenswrapper[4886]: E0129 17:02:31.800903 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee1da890-a690-46b4-95aa-3f282b3cdc30-serving-cert podName:ee1da890-a690-46b4-95aa-3f282b3cdc30 nodeName:}" failed. No retries permitted until 2026-01-29 17:02:32.300885685 +0000 UTC m=+2435.209604957 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ee1da890-a690-46b4-95aa-3f282b3cdc30-serving-cert") pod "observability-ui-dashboards-66cbf594b5-ld46c" (UID: "ee1da890-a690-46b4-95aa-3f282b3cdc30") : secret "observability-ui-dashboards" not found Jan 29 17:02:31 crc kubenswrapper[4886]: I0129 17:02:31.846471 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmcn5\" (UniqueName: \"kubernetes.io/projected/ee1da890-a690-46b4-95aa-3f282b3cdc30-kube-api-access-bmcn5\") pod \"observability-ui-dashboards-66cbf594b5-ld46c\" (UID: \"ee1da890-a690-46b4-95aa-3f282b3cdc30\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ld46c" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.012184 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-69c97cc7f-npplt"] Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.030740 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69c97cc7f-npplt" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.059375 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69c97cc7f-npplt"] Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.091536 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.113453 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57bcd464-9c19-451c-b1e7-ec31c75da5dd-trusted-ca-bundle\") pod \"console-69c97cc7f-npplt\" (UID: \"57bcd464-9c19-451c-b1e7-ec31c75da5dd\") " pod="openshift-console/console-69c97cc7f-npplt" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.113499 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/57bcd464-9c19-451c-b1e7-ec31c75da5dd-oauth-serving-cert\") pod \"console-69c97cc7f-npplt\" (UID: \"57bcd464-9c19-451c-b1e7-ec31c75da5dd\") " pod="openshift-console/console-69c97cc7f-npplt" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.113539 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/57bcd464-9c19-451c-b1e7-ec31c75da5dd-console-oauth-config\") pod \"console-69c97cc7f-npplt\" (UID: \"57bcd464-9c19-451c-b1e7-ec31c75da5dd\") " pod="openshift-console/console-69c97cc7f-npplt" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.113678 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w979x\" (UniqueName: \"kubernetes.io/projected/57bcd464-9c19-451c-b1e7-ec31c75da5dd-kube-api-access-w979x\") pod \"console-69c97cc7f-npplt\" (UID: \"57bcd464-9c19-451c-b1e7-ec31c75da5dd\") " pod="openshift-console/console-69c97cc7f-npplt" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.113737 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/57bcd464-9c19-451c-b1e7-ec31c75da5dd-console-config\") pod \"console-69c97cc7f-npplt\" (UID: \"57bcd464-9c19-451c-b1e7-ec31c75da5dd\") " pod="openshift-console/console-69c97cc7f-npplt" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.113764 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/57bcd464-9c19-451c-b1e7-ec31c75da5dd-service-ca\") pod \"console-69c97cc7f-npplt\" (UID: \"57bcd464-9c19-451c-b1e7-ec31c75da5dd\") " pod="openshift-console/console-69c97cc7f-npplt" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.113827 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/57bcd464-9c19-451c-b1e7-ec31c75da5dd-console-serving-cert\") pod \"console-69c97cc7f-npplt\" (UID: \"57bcd464-9c19-451c-b1e7-ec31c75da5dd\") " pod="openshift-console/console-69c97cc7f-npplt" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.128801 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.145757 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.145949 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.146090 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.146206 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.146709 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.147463 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.147566 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.147657 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.150528 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-gbmnx" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.217510 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ce7955a1-eb58-425a-872a-7ec102b8e090-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.217567 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w979x\" (UniqueName: \"kubernetes.io/projected/57bcd464-9c19-451c-b1e7-ec31c75da5dd-kube-api-access-w979x\") pod \"console-69c97cc7f-npplt\" (UID: \"57bcd464-9c19-451c-b1e7-ec31c75da5dd\") " pod="openshift-console/console-69c97cc7f-npplt" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.217606 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/57bcd464-9c19-451c-b1e7-ec31c75da5dd-console-config\") pod \"console-69c97cc7f-npplt\" (UID: \"57bcd464-9c19-451c-b1e7-ec31c75da5dd\") " pod="openshift-console/console-69c97cc7f-npplt" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.217634 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/57bcd464-9c19-451c-b1e7-ec31c75da5dd-service-ca\") pod \"console-69c97cc7f-npplt\" (UID: \"57bcd464-9c19-451c-b1e7-ec31c75da5dd\") " pod="openshift-console/console-69c97cc7f-npplt" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.217679 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/57bcd464-9c19-451c-b1e7-ec31c75da5dd-console-serving-cert\") pod \"console-69c97cc7f-npplt\" (UID: \"57bcd464-9c19-451c-b1e7-ec31c75da5dd\") " pod="openshift-console/console-69c97cc7f-npplt" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.217702 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ce7955a1-eb58-425a-872a-7ec102b8e090-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.217741 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ce7955a1-eb58-425a-872a-7ec102b8e090-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.217764 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ce7955a1-eb58-425a-872a-7ec102b8e090-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.217796 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ce7955a1-eb58-425a-872a-7ec102b8e090-config\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.217820 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ce7955a1-eb58-425a-872a-7ec102b8e090-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.217852 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-68e86941-9560-4703-a0e6-50bee25f62a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68e86941-9560-4703-a0e6-50bee25f62a0\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " 
pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.217919 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57bcd464-9c19-451c-b1e7-ec31c75da5dd-trusted-ca-bundle\") pod \"console-69c97cc7f-npplt\" (UID: \"57bcd464-9c19-451c-b1e7-ec31c75da5dd\") " pod="openshift-console/console-69c97cc7f-npplt" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.217937 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/57bcd464-9c19-451c-b1e7-ec31c75da5dd-oauth-serving-cert\") pod \"console-69c97cc7f-npplt\" (UID: \"57bcd464-9c19-451c-b1e7-ec31c75da5dd\") " pod="openshift-console/console-69c97cc7f-npplt" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.217962 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2cnt\" (UniqueName: \"kubernetes.io/projected/ce7955a1-eb58-425a-872a-7ec102b8e090-kube-api-access-w2cnt\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.217980 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/57bcd464-9c19-451c-b1e7-ec31c75da5dd-console-oauth-config\") pod \"console-69c97cc7f-npplt\" (UID: \"57bcd464-9c19-451c-b1e7-ec31c75da5dd\") " pod="openshift-console/console-69c97cc7f-npplt" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.217998 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ce7955a1-eb58-425a-872a-7ec102b8e090-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.218027 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ce7955a1-eb58-425a-872a-7ec102b8e090-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.219276 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/57bcd464-9c19-451c-b1e7-ec31c75da5dd-console-config\") pod \"console-69c97cc7f-npplt\" (UID: \"57bcd464-9c19-451c-b1e7-ec31c75da5dd\") " pod="openshift-console/console-69c97cc7f-npplt" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.219683 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/57bcd464-9c19-451c-b1e7-ec31c75da5dd-service-ca\") pod \"console-69c97cc7f-npplt\" (UID: \"57bcd464-9c19-451c-b1e7-ec31c75da5dd\") " pod="openshift-console/console-69c97cc7f-npplt" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.220178 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57bcd464-9c19-451c-b1e7-ec31c75da5dd-trusted-ca-bundle\") pod \"console-69c97cc7f-npplt\" (UID: 
\"57bcd464-9c19-451c-b1e7-ec31c75da5dd\") " pod="openshift-console/console-69c97cc7f-npplt" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.220761 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/57bcd464-9c19-451c-b1e7-ec31c75da5dd-oauth-serving-cert\") pod \"console-69c97cc7f-npplt\" (UID: \"57bcd464-9c19-451c-b1e7-ec31c75da5dd\") " pod="openshift-console/console-69c97cc7f-npplt" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.230429 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/57bcd464-9c19-451c-b1e7-ec31c75da5dd-console-serving-cert\") pod \"console-69c97cc7f-npplt\" (UID: \"57bcd464-9c19-451c-b1e7-ec31c75da5dd\") " pod="openshift-console/console-69c97cc7f-npplt" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.247599 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/57bcd464-9c19-451c-b1e7-ec31c75da5dd-console-oauth-config\") pod \"console-69c97cc7f-npplt\" (UID: \"57bcd464-9c19-451c-b1e7-ec31c75da5dd\") " pod="openshift-console/console-69c97cc7f-npplt" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.251038 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w979x\" (UniqueName: \"kubernetes.io/projected/57bcd464-9c19-451c-b1e7-ec31c75da5dd-kube-api-access-w979x\") pod \"console-69c97cc7f-npplt\" (UID: \"57bcd464-9c19-451c-b1e7-ec31c75da5dd\") " pod="openshift-console/console-69c97cc7f-npplt" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.319366 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ce7955a1-eb58-425a-872a-7ec102b8e090-config\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.319420 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ce7955a1-eb58-425a-872a-7ec102b8e090-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.319451 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-68e86941-9560-4703-a0e6-50bee25f62a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68e86941-9560-4703-a0e6-50bee25f62a0\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.319492 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee1da890-a690-46b4-95aa-3f282b3cdc30-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-ld46c\" (UID: \"ee1da890-a690-46b4-95aa-3f282b3cdc30\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ld46c" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.319540 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2cnt\" (UniqueName: 
\"kubernetes.io/projected/ce7955a1-eb58-425a-872a-7ec102b8e090-kube-api-access-w2cnt\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.319558 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ce7955a1-eb58-425a-872a-7ec102b8e090-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.319584 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ce7955a1-eb58-425a-872a-7ec102b8e090-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.319614 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ce7955a1-eb58-425a-872a-7ec102b8e090-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.319682 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ce7955a1-eb58-425a-872a-7ec102b8e090-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.319711 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ce7955a1-eb58-425a-872a-7ec102b8e090-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.319732 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ce7955a1-eb58-425a-872a-7ec102b8e090-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.320508 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ce7955a1-eb58-425a-872a-7ec102b8e090-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.324755 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.324824 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-68e86941-9560-4703-a0e6-50bee25f62a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68e86941-9560-4703-a0e6-50bee25f62a0\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5b5b0b1c62be5d324bfe10f676e08a70a611b72b2c99a9227275ea9ec17aa7e0/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.326583 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ce7955a1-eb58-425a-872a-7ec102b8e090-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.327641 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ce7955a1-eb58-425a-872a-7ec102b8e090-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.327891 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ce7955a1-eb58-425a-872a-7ec102b8e090-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.331459 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee1da890-a690-46b4-95aa-3f282b3cdc30-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-ld46c\" (UID: \"ee1da890-a690-46b4-95aa-3f282b3cdc30\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ld46c" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.331698 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ce7955a1-eb58-425a-872a-7ec102b8e090-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.334169 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ce7955a1-eb58-425a-872a-7ec102b8e090-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.336021 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ce7955a1-eb58-425a-872a-7ec102b8e090-config\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.337903 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/ce7955a1-eb58-425a-872a-7ec102b8e090-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.348409 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2cnt\" (UniqueName: \"kubernetes.io/projected/ce7955a1-eb58-425a-872a-7ec102b8e090-kube-api-access-w2cnt\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.383020 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69c97cc7f-npplt" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.383544 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-68e86941-9560-4703-a0e6-50bee25f62a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68e86941-9560-4703-a0e6-50bee25f62a0\") pod \"prometheus-metric-storage-0\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.462063 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 29 17:02:32 crc kubenswrapper[4886]: I0129 17:02:32.511481 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ld46c" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.369820 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-b7d9p"] Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.371217 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-b7d9p" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.375427 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.380349 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.380590 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-xd2tq" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.390211 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-b7d9p"] Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.445566 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-xhds2"] Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.453082 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-xhds2" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.462200 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-xhds2"] Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.463479 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544b4515-481c-47f1-acb6-ed332a3497d4-combined-ca-bundle\") pod \"ovn-controller-b7d9p\" (UID: \"544b4515-481c-47f1-acb6-ed332a3497d4\") " pod="openstack/ovn-controller-b7d9p" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.463538 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7wkw\" (UniqueName: \"kubernetes.io/projected/544b4515-481c-47f1-acb6-ed332a3497d4-kube-api-access-p7wkw\") pod \"ovn-controller-b7d9p\" (UID: \"544b4515-481c-47f1-acb6-ed332a3497d4\") " pod="openstack/ovn-controller-b7d9p" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.463569 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/544b4515-481c-47f1-acb6-ed332a3497d4-var-log-ovn\") pod \"ovn-controller-b7d9p\" (UID: \"544b4515-481c-47f1-acb6-ed332a3497d4\") " pod="openstack/ovn-controller-b7d9p" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.463597 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/544b4515-481c-47f1-acb6-ed332a3497d4-ovn-controller-tls-certs\") pod \"ovn-controller-b7d9p\" (UID: \"544b4515-481c-47f1-acb6-ed332a3497d4\") " pod="openstack/ovn-controller-b7d9p" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.463642 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/544b4515-481c-47f1-acb6-ed332a3497d4-var-run-ovn\") pod \"ovn-controller-b7d9p\" (UID: \"544b4515-481c-47f1-acb6-ed332a3497d4\") " pod="openstack/ovn-controller-b7d9p" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.463701 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/544b4515-481c-47f1-acb6-ed332a3497d4-scripts\") pod \"ovn-controller-b7d9p\" (UID: \"544b4515-481c-47f1-acb6-ed332a3497d4\") " pod="openstack/ovn-controller-b7d9p" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.463758 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/544b4515-481c-47f1-acb6-ed332a3497d4-var-run\") pod \"ovn-controller-b7d9p\" (UID: \"544b4515-481c-47f1-acb6-ed332a3497d4\") " pod="openstack/ovn-controller-b7d9p" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.565471 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/544b4515-481c-47f1-acb6-ed332a3497d4-scripts\") pod \"ovn-controller-b7d9p\" (UID: \"544b4515-481c-47f1-acb6-ed332a3497d4\") " pod="openstack/ovn-controller-b7d9p" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.565534 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/03dc141f-69cc-4cb4-af0b-acf85642b86e-etc-ovs\") pod \"ovn-controller-ovs-xhds2\" (UID: \"03dc141f-69cc-4cb4-af0b-acf85642b86e\") " pod="openstack/ovn-controller-ovs-xhds2" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.565568 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdcb4\" (UniqueName: \"kubernetes.io/projected/03dc141f-69cc-4cb4-af0b-acf85642b86e-kube-api-access-rdcb4\") pod \"ovn-controller-ovs-xhds2\" (UID: \"03dc141f-69cc-4cb4-af0b-acf85642b86e\") " pod="openstack/ovn-controller-ovs-xhds2" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.565625 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/544b4515-481c-47f1-acb6-ed332a3497d4-var-run\") pod \"ovn-controller-b7d9p\" (UID: \"544b4515-481c-47f1-acb6-ed332a3497d4\") " pod="openstack/ovn-controller-b7d9p" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.565641 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/03dc141f-69cc-4cb4-af0b-acf85642b86e-scripts\") pod \"ovn-controller-ovs-xhds2\" (UID: \"03dc141f-69cc-4cb4-af0b-acf85642b86e\") " pod="openstack/ovn-controller-ovs-xhds2" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.565678 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/03dc141f-69cc-4cb4-af0b-acf85642b86e-var-log\") pod \"ovn-controller-ovs-xhds2\" (UID: \"03dc141f-69cc-4cb4-af0b-acf85642b86e\") " pod="openstack/ovn-controller-ovs-xhds2" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.565713 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/03dc141f-69cc-4cb4-af0b-acf85642b86e-var-run\") pod \"ovn-controller-ovs-xhds2\" (UID: \"03dc141f-69cc-4cb4-af0b-acf85642b86e\") " pod="openstack/ovn-controller-ovs-xhds2" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.565754 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544b4515-481c-47f1-acb6-ed332a3497d4-combined-ca-bundle\") pod \"ovn-controller-b7d9p\" (UID: \"544b4515-481c-47f1-acb6-ed332a3497d4\") " pod="openstack/ovn-controller-b7d9p" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.565777 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7wkw\" (UniqueName: \"kubernetes.io/projected/544b4515-481c-47f1-acb6-ed332a3497d4-kube-api-access-p7wkw\") pod \"ovn-controller-b7d9p\" (UID: \"544b4515-481c-47f1-acb6-ed332a3497d4\") " pod="openstack/ovn-controller-b7d9p" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.565795 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/544b4515-481c-47f1-acb6-ed332a3497d4-var-log-ovn\") pod \"ovn-controller-b7d9p\" (UID: \"544b4515-481c-47f1-acb6-ed332a3497d4\") " pod="openstack/ovn-controller-b7d9p" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.565811 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/544b4515-481c-47f1-acb6-ed332a3497d4-ovn-controller-tls-certs\") pod 
\"ovn-controller-b7d9p\" (UID: \"544b4515-481c-47f1-acb6-ed332a3497d4\") " pod="openstack/ovn-controller-b7d9p" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.565836 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/544b4515-481c-47f1-acb6-ed332a3497d4-var-run-ovn\") pod \"ovn-controller-b7d9p\" (UID: \"544b4515-481c-47f1-acb6-ed332a3497d4\") " pod="openstack/ovn-controller-b7d9p" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.565878 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/03dc141f-69cc-4cb4-af0b-acf85642b86e-var-lib\") pod \"ovn-controller-ovs-xhds2\" (UID: \"03dc141f-69cc-4cb4-af0b-acf85642b86e\") " pod="openstack/ovn-controller-ovs-xhds2" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.593478 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544b4515-481c-47f1-acb6-ed332a3497d4-combined-ca-bundle\") pod \"ovn-controller-b7d9p\" (UID: \"544b4515-481c-47f1-acb6-ed332a3497d4\") " pod="openstack/ovn-controller-b7d9p" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.598895 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/544b4515-481c-47f1-acb6-ed332a3497d4-ovn-controller-tls-certs\") pod \"ovn-controller-b7d9p\" (UID: \"544b4515-481c-47f1-acb6-ed332a3497d4\") " pod="openstack/ovn-controller-b7d9p" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.624864 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7wkw\" (UniqueName: \"kubernetes.io/projected/544b4515-481c-47f1-acb6-ed332a3497d4-kube-api-access-p7wkw\") pod \"ovn-controller-b7d9p\" (UID: \"544b4515-481c-47f1-acb6-ed332a3497d4\") " pod="openstack/ovn-controller-b7d9p" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.671403 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/03dc141f-69cc-4cb4-af0b-acf85642b86e-var-lib\") pod \"ovn-controller-ovs-xhds2\" (UID: \"03dc141f-69cc-4cb4-af0b-acf85642b86e\") " pod="openstack/ovn-controller-ovs-xhds2" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.671465 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/03dc141f-69cc-4cb4-af0b-acf85642b86e-etc-ovs\") pod \"ovn-controller-ovs-xhds2\" (UID: \"03dc141f-69cc-4cb4-af0b-acf85642b86e\") " pod="openstack/ovn-controller-ovs-xhds2" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.671508 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdcb4\" (UniqueName: \"kubernetes.io/projected/03dc141f-69cc-4cb4-af0b-acf85642b86e-kube-api-access-rdcb4\") pod \"ovn-controller-ovs-xhds2\" (UID: \"03dc141f-69cc-4cb4-af0b-acf85642b86e\") " pod="openstack/ovn-controller-ovs-xhds2" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.671563 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/03dc141f-69cc-4cb4-af0b-acf85642b86e-scripts\") pod \"ovn-controller-ovs-xhds2\" (UID: \"03dc141f-69cc-4cb4-af0b-acf85642b86e\") " pod="openstack/ovn-controller-ovs-xhds2" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.671597 
4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/03dc141f-69cc-4cb4-af0b-acf85642b86e-var-log\") pod \"ovn-controller-ovs-xhds2\" (UID: \"03dc141f-69cc-4cb4-af0b-acf85642b86e\") " pod="openstack/ovn-controller-ovs-xhds2" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.671617 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/03dc141f-69cc-4cb4-af0b-acf85642b86e-var-run\") pod \"ovn-controller-ovs-xhds2\" (UID: \"03dc141f-69cc-4cb4-af0b-acf85642b86e\") " pod="openstack/ovn-controller-ovs-xhds2" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.673284 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/03dc141f-69cc-4cb4-af0b-acf85642b86e-etc-ovs\") pod \"ovn-controller-ovs-xhds2\" (UID: \"03dc141f-69cc-4cb4-af0b-acf85642b86e\") " pod="openstack/ovn-controller-ovs-xhds2" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.674245 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/03dc141f-69cc-4cb4-af0b-acf85642b86e-scripts\") pod \"ovn-controller-ovs-xhds2\" (UID: \"03dc141f-69cc-4cb4-af0b-acf85642b86e\") " pod="openstack/ovn-controller-ovs-xhds2" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.706713 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdcb4\" (UniqueName: \"kubernetes.io/projected/03dc141f-69cc-4cb4-af0b-acf85642b86e-kube-api-access-rdcb4\") pod \"ovn-controller-ovs-xhds2\" (UID: \"03dc141f-69cc-4cb4-af0b-acf85642b86e\") " pod="openstack/ovn-controller-ovs-xhds2" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.957170 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/544b4515-481c-47f1-acb6-ed332a3497d4-scripts\") pod \"ovn-controller-b7d9p\" (UID: \"544b4515-481c-47f1-acb6-ed332a3497d4\") " pod="openstack/ovn-controller-b7d9p" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.957872 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/544b4515-481c-47f1-acb6-ed332a3497d4-var-run\") pod \"ovn-controller-b7d9p\" (UID: \"544b4515-481c-47f1-acb6-ed332a3497d4\") " pod="openstack/ovn-controller-b7d9p" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.958020 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/544b4515-481c-47f1-acb6-ed332a3497d4-var-log-ovn\") pod \"ovn-controller-b7d9p\" (UID: \"544b4515-481c-47f1-acb6-ed332a3497d4\") " pod="openstack/ovn-controller-b7d9p" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.958198 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/544b4515-481c-47f1-acb6-ed332a3497d4-var-run-ovn\") pod \"ovn-controller-b7d9p\" (UID: \"544b4515-481c-47f1-acb6-ed332a3497d4\") " pod="openstack/ovn-controller-b7d9p" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.958535 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/03dc141f-69cc-4cb4-af0b-acf85642b86e-var-run\") pod \"ovn-controller-ovs-xhds2\" (UID: \"03dc141f-69cc-4cb4-af0b-acf85642b86e\") " pod="openstack/ovn-controller-ovs-xhds2" 
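[annotation] Every pod in this section moves through the same three-step volume pattern: reconciler_common.go:245 "VerifyControllerAttachedVolume started", reconciler_common.go:218 "MountVolume started", then operation_generator.go:637 "MountVolume.SetUp succeeded". A self-contained sketch of the desired-state/actual-state reconcile loop behind those entries; the types and names are illustrative stand-ins, not kubelet's real volumemanager API:

```go
// For each volume a pod needs (desired state) that is not yet mounted
// (actual state), the reconciler verifies attachment, starts the mount,
// and records the mount on success — producing one log line per step.
package main

import "fmt"

type volume struct{ name, pod string }

func reconcile(desired []volume, mounted map[string]bool) {
	for _, v := range desired {
		if mounted[v.name] {
			continue // already in actual state; nothing to do this pass
		}
		fmt.Printf("VerifyControllerAttachedVolume started for volume %q pod %q\n", v.name, v.pod)
		fmt.Printf("MountVolume started for volume %q pod %q\n", v.name, v.pod)
		mounted[v.name] = true // success path: update actual state
		fmt.Printf("MountVolume.SetUp succeeded for volume %q pod %q\n", v.name, v.pod)
	}
}

func main() {
	desired := []volume{
		{"scripts", "openstack/ovn-controller-b7d9p"},
		{"var-run", "openstack/ovn-controller-b7d9p"},
	}
	reconcile(desired, map[string]bool{})
}
```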
Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.958613 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/03dc141f-69cc-4cb4-af0b-acf85642b86e-var-lib\") pod \"ovn-controller-ovs-xhds2\" (UID: \"03dc141f-69cc-4cb4-af0b-acf85642b86e\") " pod="openstack/ovn-controller-ovs-xhds2" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.958702 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/03dc141f-69cc-4cb4-af0b-acf85642b86e-var-log\") pod \"ovn-controller-ovs-xhds2\" (UID: \"03dc141f-69cc-4cb4-af0b-acf85642b86e\") " pod="openstack/ovn-controller-ovs-xhds2" Jan 29 17:02:33 crc kubenswrapper[4886]: I0129 17:02:33.994851 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-b7d9p" Jan 29 17:02:34 crc kubenswrapper[4886]: I0129 17:02:34.103105 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-xhds2" Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.256454 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.258696 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.261261 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-zjp5g" Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.267235 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.267429 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.267631 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.268126 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.274946 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.356636 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cfc2829b-4c70-4482-9f64-05fedd0caae9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cfc2829b-4c70-4482-9f64-05fedd0caae9\") pod \"ovsdbserver-nb-0\" (UID: \"39601bb5-f2bc-47a6-824a-609c207b963f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.356748 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39601bb5-f2bc-47a6-824a-609c207b963f-config\") pod \"ovsdbserver-nb-0\" (UID: \"39601bb5-f2bc-47a6-824a-609c207b963f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.356827 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xhx9\" (UniqueName: \"kubernetes.io/projected/39601bb5-f2bc-47a6-824a-609c207b963f-kube-api-access-5xhx9\") pod \"ovsdbserver-nb-0\" (UID: 
\"39601bb5-f2bc-47a6-824a-609c207b963f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.356890 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/39601bb5-f2bc-47a6-824a-609c207b963f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"39601bb5-f2bc-47a6-824a-609c207b963f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.357035 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39601bb5-f2bc-47a6-824a-609c207b963f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"39601bb5-f2bc-47a6-824a-609c207b963f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.357067 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/39601bb5-f2bc-47a6-824a-609c207b963f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"39601bb5-f2bc-47a6-824a-609c207b963f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.357120 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/39601bb5-f2bc-47a6-824a-609c207b963f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"39601bb5-f2bc-47a6-824a-609c207b963f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.357225 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/39601bb5-f2bc-47a6-824a-609c207b963f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"39601bb5-f2bc-47a6-824a-609c207b963f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.459238 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39601bb5-f2bc-47a6-824a-609c207b963f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"39601bb5-f2bc-47a6-824a-609c207b963f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.459304 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/39601bb5-f2bc-47a6-824a-609c207b963f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"39601bb5-f2bc-47a6-824a-609c207b963f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.459358 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/39601bb5-f2bc-47a6-824a-609c207b963f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"39601bb5-f2bc-47a6-824a-609c207b963f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.459436 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/39601bb5-f2bc-47a6-824a-609c207b963f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"39601bb5-f2bc-47a6-824a-609c207b963f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 17:02:37 crc kubenswrapper[4886]: 
I0129 17:02:37.459526 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cfc2829b-4c70-4482-9f64-05fedd0caae9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cfc2829b-4c70-4482-9f64-05fedd0caae9\") pod \"ovsdbserver-nb-0\" (UID: \"39601bb5-f2bc-47a6-824a-609c207b963f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.459582 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39601bb5-f2bc-47a6-824a-609c207b963f-config\") pod \"ovsdbserver-nb-0\" (UID: \"39601bb5-f2bc-47a6-824a-609c207b963f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.459622 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xhx9\" (UniqueName: \"kubernetes.io/projected/39601bb5-f2bc-47a6-824a-609c207b963f-kube-api-access-5xhx9\") pod \"ovsdbserver-nb-0\" (UID: \"39601bb5-f2bc-47a6-824a-609c207b963f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.459654 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/39601bb5-f2bc-47a6-824a-609c207b963f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"39601bb5-f2bc-47a6-824a-609c207b963f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.460544 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/39601bb5-f2bc-47a6-824a-609c207b963f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"39601bb5-f2bc-47a6-824a-609c207b963f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.461070 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/39601bb5-f2bc-47a6-824a-609c207b963f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"39601bb5-f2bc-47a6-824a-609c207b963f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.461360 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39601bb5-f2bc-47a6-824a-609c207b963f-config\") pod \"ovsdbserver-nb-0\" (UID: \"39601bb5-f2bc-47a6-824a-609c207b963f\") " pod="openstack/ovsdbserver-nb-0" Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.463076 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
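
The csi_attacher.go:380 line above is expected with the kubevirt.io.hostpath-provisioner: when a CSI node plugin does not advertise the STAGE_UNSTAGE_VOLUME capability, kubelet skips the NodeStageVolume/MountDevice staging step and goes straight to the per-pod NodePublishVolume calls (the "MountVolume.SetUp succeeded" lines). A minimal Go sketch of that negotiation, assuming the github.com/container-storage-interface/spec Go bindings; this is an illustrative stub, not the provisioner's actual source:

    package csisketch

    import (
    	"context"

    	"github.com/container-storage-interface/spec/lib/go/csi"
    )

    // nodeServer is a hypothetical CSI node plugin stub, shown only to
    // illustrate the capability check behind the log line above.
    type nodeServer struct{}

    // Returning an empty capability list (no STAGE_UNSTAGE_VOLUME) is what
    // makes kubelet log "STAGE_UNSTAGE_VOLUME capability not set. Skipping
    // MountDevice..." and proceed directly to NodePublishVolume.
    func (n *nodeServer) NodeGetCapabilities(ctx context.Context, req *csi.NodeGetCapabilitiesRequest) (*csi.NodeGetCapabilitiesResponse, error) {
    	return &csi.NodeGetCapabilitiesResponse{
    		Capabilities: []*csi.NodeServiceCapability{},
    	}, nil
    }

The same message recurs below for the ovsdbserver-sb-0 volume; in both cases the following "MountVolume.MountDevice succeeded" entry records the computed global mount path rather than a real staging operation.
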
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.463114 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cfc2829b-4c70-4482-9f64-05fedd0caae9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cfc2829b-4c70-4482-9f64-05fedd0caae9\") pod \"ovsdbserver-nb-0\" (UID: \"39601bb5-f2bc-47a6-824a-609c207b963f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7cf9dd5b9a6bbdfffd591daeb645dac0dc01e8f7deb302127ed56fc967835337/globalmount\"" pod="openstack/ovsdbserver-nb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.464589 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/39601bb5-f2bc-47a6-824a-609c207b963f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"39601bb5-f2bc-47a6-824a-609c207b963f\") " pod="openstack/ovsdbserver-nb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.464921 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39601bb5-f2bc-47a6-824a-609c207b963f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"39601bb5-f2bc-47a6-824a-609c207b963f\") " pod="openstack/ovsdbserver-nb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.465557 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/39601bb5-f2bc-47a6-824a-609c207b963f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"39601bb5-f2bc-47a6-824a-609c207b963f\") " pod="openstack/ovsdbserver-nb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.482908 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xhx9\" (UniqueName: \"kubernetes.io/projected/39601bb5-f2bc-47a6-824a-609c207b963f-kube-api-access-5xhx9\") pod \"ovsdbserver-nb-0\" (UID: \"39601bb5-f2bc-47a6-824a-609c207b963f\") " pod="openstack/ovsdbserver-nb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.495742 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.497680 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.500230 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.500511 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.500806 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.500901 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-q9lrf"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.522937 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cfc2829b-4c70-4482-9f64-05fedd0caae9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cfc2829b-4c70-4482-9f64-05fedd0caae9\") pod \"ovsdbserver-nb-0\" (UID: \"39601bb5-f2bc-47a6-824a-609c207b963f\") " pod="openstack/ovsdbserver-nb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.569055 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b015d0c-8672-450a-a079-965cc4ccd07f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7b015d0c-8672-450a-a079-965cc4ccd07f\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.569462 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b015d0c-8672-450a-a079-965cc4ccd07f-config\") pod \"ovsdbserver-sb-0\" (UID: \"7b015d0c-8672-450a-a079-965cc4ccd07f\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.569636 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7b015d0c-8672-450a-a079-965cc4ccd07f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7b015d0c-8672-450a-a079-965cc4ccd07f\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.569767 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9b6c9a9a-cc72-46ff-b530-2325a25d9ef0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9b6c9a9a-cc72-46ff-b530-2325a25d9ef0\") pod \"ovsdbserver-sb-0\" (UID: \"7b015d0c-8672-450a-a079-965cc4ccd07f\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.569963 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b015d0c-8672-450a-a079-965cc4ccd07f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7b015d0c-8672-450a-a079-965cc4ccd07f\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.570215 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b015d0c-8672-450a-a079-965cc4ccd07f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7b015d0c-8672-450a-a079-965cc4ccd07f\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.570351 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlzlz\" (UniqueName: \"kubernetes.io/projected/7b015d0c-8672-450a-a079-965cc4ccd07f-kube-api-access-vlzlz\") pod \"ovsdbserver-sb-0\" (UID: \"7b015d0c-8672-450a-a079-965cc4ccd07f\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.570441 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b015d0c-8672-450a-a079-965cc4ccd07f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7b015d0c-8672-450a-a079-965cc4ccd07f\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.572120 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.586784 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.672277 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b015d0c-8672-450a-a079-965cc4ccd07f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7b015d0c-8672-450a-a079-965cc4ccd07f\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.672788 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b015d0c-8672-450a-a079-965cc4ccd07f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7b015d0c-8672-450a-a079-965cc4ccd07f\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.672857 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlzlz\" (UniqueName: \"kubernetes.io/projected/7b015d0c-8672-450a-a079-965cc4ccd07f-kube-api-access-vlzlz\") pod \"ovsdbserver-sb-0\" (UID: \"7b015d0c-8672-450a-a079-965cc4ccd07f\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.672940 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b015d0c-8672-450a-a079-965cc4ccd07f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7b015d0c-8672-450a-a079-965cc4ccd07f\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.673003 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b015d0c-8672-450a-a079-965cc4ccd07f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7b015d0c-8672-450a-a079-965cc4ccd07f\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.673118 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b015d0c-8672-450a-a079-965cc4ccd07f-config\") pod \"ovsdbserver-sb-0\" (UID: \"7b015d0c-8672-450a-a079-965cc4ccd07f\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.673263 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7b015d0c-8672-450a-a079-965cc4ccd07f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7b015d0c-8672-450a-a079-965cc4ccd07f\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.673297 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9b6c9a9a-cc72-46ff-b530-2325a25d9ef0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9b6c9a9a-cc72-46ff-b530-2325a25d9ef0\") pod \"ovsdbserver-sb-0\" (UID: \"7b015d0c-8672-450a-a079-965cc4ccd07f\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.674172 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b015d0c-8672-450a-a079-965cc4ccd07f-config\") pod \"ovsdbserver-sb-0\" (UID: \"7b015d0c-8672-450a-a079-965cc4ccd07f\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.674791 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7b015d0c-8672-450a-a079-965cc4ccd07f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7b015d0c-8672-450a-a079-965cc4ccd07f\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.676123 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b015d0c-8672-450a-a079-965cc4ccd07f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7b015d0c-8672-450a-a079-965cc4ccd07f\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.680304 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.680379 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9b6c9a9a-cc72-46ff-b530-2325a25d9ef0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9b6c9a9a-cc72-46ff-b530-2325a25d9ef0\") pod \"ovsdbserver-sb-0\" (UID: \"7b015d0c-8672-450a-a079-965cc4ccd07f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/297149745560dc1f1ff1e411a84efac3cc898ea24d98a3f7b5a3d7276b7eb1e8/globalmount\"" pod="openstack/ovsdbserver-sb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.680577 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b015d0c-8672-450a-a079-965cc4ccd07f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7b015d0c-8672-450a-a079-965cc4ccd07f\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.680732 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b015d0c-8672-450a-a079-965cc4ccd07f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7b015d0c-8672-450a-a079-965cc4ccd07f\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.693740 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b015d0c-8672-450a-a079-965cc4ccd07f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7b015d0c-8672-450a-a079-965cc4ccd07f\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.709574 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlzlz\" (UniqueName: \"kubernetes.io/projected/7b015d0c-8672-450a-a079-965cc4ccd07f-kube-api-access-vlzlz\") pod \"ovsdbserver-sb-0\" (UID: \"7b015d0c-8672-450a-a079-965cc4ccd07f\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.724494 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9b6c9a9a-cc72-46ff-b530-2325a25d9ef0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9b6c9a9a-cc72-46ff-b530-2325a25d9ef0\") pod \"ovsdbserver-sb-0\" (UID: \"7b015d0c-8672-450a-a079-965cc4ccd07f\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 17:02:37 crc kubenswrapper[4886]: I0129 17:02:37.887428 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 29 17:02:41 crc kubenswrapper[4886]: I0129 17:02:41.617241 4886 scope.go:117] "RemoveContainer" containerID="1ef597c576c05004c5148470ade7ddd51ab3cad8d942f918ff09afb054559dfc"
Jan 29 17:02:41 crc kubenswrapper[4886]: E0129 17:02:41.618246 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:02:56 crc kubenswrapper[4886]: I0129 17:02:56.615520 4886 scope.go:117] "RemoveContainer" containerID="1ef597c576c05004c5148470ade7ddd51ab3cad8d942f918ff09afb054559dfc"
Jan 29 17:02:56 crc kubenswrapper[4886]: E0129 17:02:56.616711 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:03:11 crc kubenswrapper[4886]: I0129 17:03:11.615993 4886 scope.go:117] "RemoveContainer" containerID="1ef597c576c05004c5148470ade7ddd51ab3cad8d942f918ff09afb054559dfc"
Jan 29 17:03:11 crc kubenswrapper[4886]: E0129 17:03:11.617660 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:03:14 crc kubenswrapper[4886]: E0129 17:03:14.655447 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified"
Jan 29 17:03:14 crc kubenswrapper[4886]: E0129 17:03:14.655999 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hv64g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-2_openstack(842bfe4d-04ba-4143-9076-3033163c7b82): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 29 17:03:14 crc kubenswrapper[4886]: E0129 17:03:14.657234 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-2" podUID="842bfe4d-04ba-4143-9076-3033163c7b82"
Jan 29 17:03:14 crc kubenswrapper[4886]: E0129 17:03:14.676562 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified"
Jan 29 17:03:14 crc kubenswrapper[4886]: E0129 17:03:14.676760 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-67qmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-1_openstack(49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 29 17:03:14 crc kubenswrapper[4886]: E0129 17:03:14.678003 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-1" podUID="49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10"
Jan 29 17:03:15 crc kubenswrapper[4886]: E0129 17:03:15.555559 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-1" podUID="49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10"
Jan 29 17:03:15 crc kubenswrapper[4886]: E0129 17:03:15.555634 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-2" podUID="842bfe4d-04ba-4143-9076-3033163c7b82"
Jan 29 17:03:20 crc kubenswrapper[4886]: E0129 17:03:20.489100 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified"
Jan 29 17:03:20 crc kubenswrapper[4886]: E0129 17:03:20.490290 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vpbz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(2b0be43b-8956-45aa-ad50-de9183b3fea3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 29 17:03:20 crc kubenswrapper[4886]: E0129 17:03:20.492062 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="2b0be43b-8956-45aa-ad50-de9183b3fea3"
Jan 29 17:03:20 crc kubenswrapper[4886]: E0129 17:03:20.600916 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="2b0be43b-8956-45aa-ad50-de9183b3fea3"
Jan 29 17:03:23 crc kubenswrapper[4886]: I0129 17:03:23.614798 4886 scope.go:117] "RemoveContainer" containerID="1ef597c576c05004c5148470ade7ddd51ab3cad8d942f918ff09afb054559dfc"
Jan 29 17:03:23 crc kubenswrapper[4886]: E0129 17:03:23.615517 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:03:25 crc kubenswrapper[4886]: E0129 17:03:25.291967 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 29 17:03:25 crc kubenswrapper[4886]: E0129 17:03:25.292245 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lhfqx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-4cgwx_openstack(204a721b-36ee-4631-8358-f2511f332249): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 29 17:03:25 crc kubenswrapper[4886]: E0129 17:03:25.294185 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-4cgwx" podUID="204a721b-36ee-4631-8358-f2511f332249"
Jan 29 17:03:26 crc kubenswrapper[4886]: E0129 17:03:26.647371 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified"
Jan 29 17:03:26 crc kubenswrapper[4886]: E0129 17:03:26.647898 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bpbmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(9d0db9ae-746b-419a-bc61-bf85645d2bff): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 29 17:03:26 crc kubenswrapper[4886]: E0129 17:03:26.649785 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="9d0db9ae-746b-419a-bc61-bf85645d2bff"
Jan 29 17:03:26 crc kubenswrapper[4886]: E0129 17:03:26.677496 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="9d0db9ae-746b-419a-bc61-bf85645d2bff"
Jan 29 17:03:30 crc kubenswrapper[4886]: E0129 17:03:30.083913 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 29 17:03:30 crc kubenswrapper[4886]: E0129 17:03:30.084624 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q7jjt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-pmcr7_openstack(2f1c4419-6120-44b9-853c-7a42391db3e7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 29 17:03:30 crc kubenswrapper[4886]: E0129 17:03:30.085842 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-pmcr7" podUID="2f1c4419-6120-44b9-853c-7a42391db3e7"
Jan 29 17:03:31 crc kubenswrapper[4886]: E0129 17:03:31.691991 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 29 17:03:31 crc kubenswrapper[4886]: E0129 17:03:31.692937 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6zcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-tn5pt_openstack(3748c627-3deb-4b89-acd3-2269f42ba343): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 29 17:03:31 crc kubenswrapper[4886]: E0129 17:03:31.694122 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-tn5pt" podUID="3748c627-3deb-4b89-acd3-2269f42ba343"
Jan 29 17:03:31 crc kubenswrapper[4886]: E0129 17:03:31.715570 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-tn5pt" podUID="3748c627-3deb-4b89-acd3-2269f42ba343"
Jan 29 17:03:33 crc kubenswrapper[4886]: I0129 17:03:33.508175 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-4cgwx"
Jan 29 17:03:33 crc kubenswrapper[4886]: I0129 17:03:33.519093 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pmcr7"
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pmcr7" Jan 29 17:03:33 crc kubenswrapper[4886]: I0129 17:03:33.615897 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/204a721b-36ee-4631-8358-f2511f332249-dns-svc\") pod \"204a721b-36ee-4631-8358-f2511f332249\" (UID: \"204a721b-36ee-4631-8358-f2511f332249\") " Jan 29 17:03:33 crc kubenswrapper[4886]: I0129 17:03:33.615988 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f1c4419-6120-44b9-853c-7a42391db3e7-config\") pod \"2f1c4419-6120-44b9-853c-7a42391db3e7\" (UID: \"2f1c4419-6120-44b9-853c-7a42391db3e7\") " Jan 29 17:03:33 crc kubenswrapper[4886]: I0129 17:03:33.616151 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhfqx\" (UniqueName: \"kubernetes.io/projected/204a721b-36ee-4631-8358-f2511f332249-kube-api-access-lhfqx\") pod \"204a721b-36ee-4631-8358-f2511f332249\" (UID: \"204a721b-36ee-4631-8358-f2511f332249\") " Jan 29 17:03:33 crc kubenswrapper[4886]: I0129 17:03:33.616248 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7jjt\" (UniqueName: \"kubernetes.io/projected/2f1c4419-6120-44b9-853c-7a42391db3e7-kube-api-access-q7jjt\") pod \"2f1c4419-6120-44b9-853c-7a42391db3e7\" (UID: \"2f1c4419-6120-44b9-853c-7a42391db3e7\") " Jan 29 17:03:33 crc kubenswrapper[4886]: I0129 17:03:33.616273 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/204a721b-36ee-4631-8358-f2511f332249-config\") pod \"204a721b-36ee-4631-8358-f2511f332249\" (UID: \"204a721b-36ee-4631-8358-f2511f332249\") " Jan 29 17:03:33 crc kubenswrapper[4886]: I0129 17:03:33.621879 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/204a721b-36ee-4631-8358-f2511f332249-config" (OuterVolumeSpecName: "config") pod "204a721b-36ee-4631-8358-f2511f332249" (UID: "204a721b-36ee-4631-8358-f2511f332249"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:03:33 crc kubenswrapper[4886]: I0129 17:03:33.622235 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/204a721b-36ee-4631-8358-f2511f332249-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "204a721b-36ee-4631-8358-f2511f332249" (UID: "204a721b-36ee-4631-8358-f2511f332249"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:03:33 crc kubenswrapper[4886]: I0129 17:03:33.622544 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f1c4419-6120-44b9-853c-7a42391db3e7-config" (OuterVolumeSpecName: "config") pod "2f1c4419-6120-44b9-853c-7a42391db3e7" (UID: "2f1c4419-6120-44b9-853c-7a42391db3e7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:03:33 crc kubenswrapper[4886]: I0129 17:03:33.629290 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f1c4419-6120-44b9-853c-7a42391db3e7-kube-api-access-q7jjt" (OuterVolumeSpecName: "kube-api-access-q7jjt") pod "2f1c4419-6120-44b9-853c-7a42391db3e7" (UID: "2f1c4419-6120-44b9-853c-7a42391db3e7"). InnerVolumeSpecName "kube-api-access-q7jjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:03:33 crc kubenswrapper[4886]: I0129 17:03:33.637115 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/204a721b-36ee-4631-8358-f2511f332249-kube-api-access-lhfqx" (OuterVolumeSpecName: "kube-api-access-lhfqx") pod "204a721b-36ee-4631-8358-f2511f332249" (UID: "204a721b-36ee-4631-8358-f2511f332249"). InnerVolumeSpecName "kube-api-access-lhfqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:03:33 crc kubenswrapper[4886]: I0129 17:03:33.718307 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7jjt\" (UniqueName: \"kubernetes.io/projected/2f1c4419-6120-44b9-853c-7a42391db3e7-kube-api-access-q7jjt\") on node \"crc\" DevicePath \"\"" Jan 29 17:03:33 crc kubenswrapper[4886]: I0129 17:03:33.718359 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/204a721b-36ee-4631-8358-f2511f332249-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:03:33 crc kubenswrapper[4886]: I0129 17:03:33.718371 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/204a721b-36ee-4631-8358-f2511f332249-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 17:03:33 crc kubenswrapper[4886]: I0129 17:03:33.718382 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f1c4419-6120-44b9-853c-7a42391db3e7-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:03:33 crc kubenswrapper[4886]: I0129 17:03:33.718394 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhfqx\" (UniqueName: \"kubernetes.io/projected/204a721b-36ee-4631-8358-f2511f332249-kube-api-access-lhfqx\") on node \"crc\" DevicePath \"\"" Jan 29 17:03:33 crc kubenswrapper[4886]: I0129 17:03:33.741096 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-4cgwx" event={"ID":"204a721b-36ee-4631-8358-f2511f332249","Type":"ContainerDied","Data":"b0ce5d271c3a87e35c87ccbefa1e0c1a96ac0ecd541d22ead6b84099a6bd1679"} Jan 29 17:03:33 crc kubenswrapper[4886]: I0129 17:03:33.741141 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-4cgwx" Jan 29 17:03:33 crc kubenswrapper[4886]: I0129 17:03:33.742675 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-pmcr7" event={"ID":"2f1c4419-6120-44b9-853c-7a42391db3e7","Type":"ContainerDied","Data":"617c1fe920842500bf22662dbcff00fb4394c8a8a4577281f837a4ae20881073"} Jan 29 17:03:33 crc kubenswrapper[4886]: I0129 17:03:33.742726 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pmcr7" Jan 29 17:03:33 crc kubenswrapper[4886]: I0129 17:03:33.821162 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4cgwx"] Jan 29 17:03:33 crc kubenswrapper[4886]: I0129 17:03:33.840871 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4cgwx"] Jan 29 17:03:33 crc kubenswrapper[4886]: I0129 17:03:33.865476 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pmcr7"] Jan 29 17:03:33 crc kubenswrapper[4886]: I0129 17:03:33.873135 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pmcr7"] Jan 29 17:03:33 crc kubenswrapper[4886]: I0129 17:03:33.998146 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-ld46c"] Jan 29 17:03:34 crc kubenswrapper[4886]: E0129 17:03:34.085702 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 29 17:03:34 crc kubenswrapper[4886]: E0129 17:03:34.086276 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kb44s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-bqbqx_openstack(6508ccc6-d71f-449d-bbe1-83270d005815): ErrImagePull: rpc error: code = Canceled desc = copying 
config: context canceled" logger="UnhandledError" Jan 29 17:03:34 crc kubenswrapper[4886]: E0129 17:03:34.087543 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-bqbqx" podUID="6508ccc6-d71f-449d-bbe1-83270d005815" Jan 29 17:03:34 crc kubenswrapper[4886]: I0129 17:03:34.201232 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 29 17:03:34 crc kubenswrapper[4886]: I0129 17:03:34.230964 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-b7d9p"] Jan 29 17:03:34 crc kubenswrapper[4886]: I0129 17:03:34.243933 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69c97cc7f-npplt"] Jan 29 17:03:34 crc kubenswrapper[4886]: I0129 17:03:34.367006 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 17:03:34 crc kubenswrapper[4886]: I0129 17:03:34.625469 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="204a721b-36ee-4631-8358-f2511f332249" path="/var/lib/kubelet/pods/204a721b-36ee-4631-8358-f2511f332249/volumes" Jan 29 17:03:34 crc kubenswrapper[4886]: I0129 17:03:34.626272 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f1c4419-6120-44b9-853c-7a42391db3e7" path="/var/lib/kubelet/pods/2f1c4419-6120-44b9-853c-7a42391db3e7/volumes" Jan 29 17:03:34 crc kubenswrapper[4886]: I0129 17:03:34.751612 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69c97cc7f-npplt" event={"ID":"57bcd464-9c19-451c-b1e7-ec31c75da5dd","Type":"ContainerStarted","Data":"674a4a84e7a661a8a9f9dcf78ec6c308fb06c693f936096b0c80bdfde2f814ca"} Jan 29 17:03:34 crc kubenswrapper[4886]: I0129 17:03:34.753135 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b7d9p" event={"ID":"544b4515-481c-47f1-acb6-ed332a3497d4","Type":"ContainerStarted","Data":"f570984e5e7ce5895c501c3a0b3df5c2874fac80c1bf029801391a0fe3f26640"} Jan 29 17:03:34 crc kubenswrapper[4886]: I0129 17:03:34.754072 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ce7955a1-eb58-425a-872a-7ec102b8e090","Type":"ContainerStarted","Data":"38705f04f0f2e20b7f5d72009f437278994e72d7c6d255707ef36ddaf6f80953"} Jan 29 17:03:34 crc kubenswrapper[4886]: I0129 17:03:34.755111 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ld46c" event={"ID":"ee1da890-a690-46b4-95aa-3f282b3cdc30","Type":"ContainerStarted","Data":"5fcc926e1a39bebeb290fc957f217493a2334ebaf02787d1068fc6d4a8c4f42a"} Jan 29 17:03:34 crc kubenswrapper[4886]: I0129 17:03:34.756753 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dba0c99a-0f14-42bd-8822-ee79fc73ee41","Type":"ContainerStarted","Data":"e23683912c13c24ac6376c0e92dd23177282cc9bf4441644e7ddbf8a433b486b"} Jan 29 17:03:35 crc kubenswrapper[4886]: I0129 17:03:35.615262 4886 scope.go:117] "RemoveContainer" containerID="1ef597c576c05004c5148470ade7ddd51ab3cad8d942f918ff09afb054559dfc" Jan 29 17:03:35 crc kubenswrapper[4886]: E0129 17:03:35.615739 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:03:35 crc kubenswrapper[4886]: I0129 17:03:35.767154 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69c97cc7f-npplt" event={"ID":"57bcd464-9c19-451c-b1e7-ec31c75da5dd","Type":"ContainerStarted","Data":"201fa2a5b2a106ae890063199356cfaf006a51f40787274a6ba75e8d67e88aaa"} Jan 29 17:03:36 crc kubenswrapper[4886]: I0129 17:03:36.813151 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-69c97cc7f-npplt" podStartSLOduration=65.813119711 podStartE2EDuration="1m5.813119711s" podCreationTimestamp="2026-01-29 17:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:03:36.7956357 +0000 UTC m=+2499.704355022" watchObservedRunningTime="2026-01-29 17:03:36.813119711 +0000 UTC m=+2499.721839023" Jan 29 17:03:37 crc kubenswrapper[4886]: E0129 17:03:37.945468 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 29 17:03:37 crc kubenswrapper[4886]: E0129 17:03:37.945747 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n574h5h568h87h5bch65fh68fh74h644h546h64bh68ch9bh79h54ch6ch5b5h69hd9h684hf7h649h68dh54bh66ch656h5fh78h5b9h549hd9h4q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4vq5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHan
dler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(88c8ef15-a2b1-41df-8048-752b56d26653): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 17:03:37 crc kubenswrapper[4886]: E0129 17:03:37.947213 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="88c8ef15-a2b1-41df-8048-752b56d26653" Jan 29 17:03:40 crc kubenswrapper[4886]: E0129 17:03:40.127406 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="88c8ef15-a2b1-41df-8048-752b56d26653" Jan 29 17:03:40 crc kubenswrapper[4886]: E0129 17:03:40.796772 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-bqbqx" podUID="6508ccc6-d71f-449d-bbe1-83270d005815" Jan 29 17:03:41 crc kubenswrapper[4886]: I0129 17:03:41.752423 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-xhds2"] Jan 29 17:03:42 crc kubenswrapper[4886]: I0129 17:03:42.383585 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-69c97cc7f-npplt" Jan 29 17:03:42 crc kubenswrapper[4886]: I0129 17:03:42.383676 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-69c97cc7f-npplt" Jan 29 17:03:42 crc kubenswrapper[4886]: I0129 17:03:42.389714 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-69c97cc7f-npplt" Jan 29 17:03:42 crc kubenswrapper[4886]: I0129 17:03:42.702148 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 17:03:42 crc kubenswrapper[4886]: I0129 17:03:42.843358 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-69c97cc7f-npplt" Jan 29 17:03:42 crc kubenswrapper[4886]: I0129 17:03:42.871239 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovsdbserver-sb-0"] Jan 29 17:03:42 crc kubenswrapper[4886]: I0129 17:03:42.927879 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7d44f9f6d-wvkcd"] Jan 29 17:03:42 crc kubenswrapper[4886]: E0129 17:03:42.949043 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 29 17:03:42 crc kubenswrapper[4886]: E0129 17:03:42.949229 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k7khl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(954d7d1e-fd92-4c83-87d8-87a1f866dbbe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 17:03:42 crc kubenswrapper[4886]: E0129 17:03:42.950989 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="954d7d1e-fd92-4c83-87d8-87a1f866dbbe" Jan 29 17:03:43 crc kubenswrapper[4886]: E0129 17:03:43.014495 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 29 17:03:43 crc kubenswrapper[4886]: E0129 17:03:43.014680 4886 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2mz6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(98bed306-aa68-4e53-affc-e04497079ccb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 17:03:43 crc kubenswrapper[4886]: E0129 17:03:43.015838 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="98bed306-aa68-4e53-affc-e04497079ccb" Jan 29 17:03:43 crc kubenswrapper[4886]: E0129 17:03:43.847990 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="98bed306-aa68-4e53-affc-e04497079ccb" Jan 29 17:03:43 crc kubenswrapper[4886]: E0129 17:03:43.848016 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="954d7d1e-fd92-4c83-87d8-87a1f866dbbe" Jan 29 17:03:46 crc kubenswrapper[4886]: I0129 17:03:46.615449 4886 scope.go:117] "RemoveContainer" 
containerID="1ef597c576c05004c5148470ade7ddd51ab3cad8d942f918ff09afb054559dfc" Jan 29 17:03:46 crc kubenswrapper[4886]: E0129 17:03:46.616005 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:03:47 crc kubenswrapper[4886]: W0129 17:03:47.458603 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03dc141f_69cc_4cb4_af0b_acf85642b86e.slice/crio-8a8b9b3d461cc6336b71e2f4a1f54440c360e4b681c82c16795bce27f841af7e WatchSource:0}: Error finding container 8a8b9b3d461cc6336b71e2f4a1f54440c360e4b681c82c16795bce27f841af7e: Status 404 returned error can't find the container with id 8a8b9b3d461cc6336b71e2f4a1f54440c360e4b681c82c16795bce27f841af7e Jan 29 17:03:47 crc kubenswrapper[4886]: I0129 17:03:47.885299 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xhds2" event={"ID":"03dc141f-69cc-4cb4-af0b-acf85642b86e","Type":"ContainerStarted","Data":"8a8b9b3d461cc6336b71e2f4a1f54440c360e4b681c82c16795bce27f841af7e"} Jan 29 17:03:47 crc kubenswrapper[4886]: I0129 17:03:47.887115 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7b015d0c-8672-450a-a079-965cc4ccd07f","Type":"ContainerStarted","Data":"55fb09172ecfe543ed3055282effeb7cac42ad3317ded6fadc58a6e1afee04a0"} Jan 29 17:03:48 crc kubenswrapper[4886]: W0129 17:03:48.298959 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39601bb5_f2bc_47a6_824a_609c207b963f.slice/crio-432d536255059a87132e92e40237fe7c882a36d7e32055ccf635103518ecbec9 WatchSource:0}: Error finding container 432d536255059a87132e92e40237fe7c882a36d7e32055ccf635103518ecbec9: Status 404 returned error can't find the container with id 432d536255059a87132e92e40237fe7c882a36d7e32055ccf635103518ecbec9 Jan 29 17:03:48 crc kubenswrapper[4886]: I0129 17:03:48.900391 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"39601bb5-f2bc-47a6-824a-609c207b963f","Type":"ContainerStarted","Data":"432d536255059a87132e92e40237fe7c882a36d7e32055ccf635103518ecbec9"} Jan 29 17:03:49 crc kubenswrapper[4886]: I0129 17:03:49.911938 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b7d9p" event={"ID":"544b4515-481c-47f1-acb6-ed332a3497d4","Type":"ContainerStarted","Data":"31925dc2b4451bded2a4f8317ce799c155f8528fe1011988d10f0aa3ff739d00"} Jan 29 17:03:49 crc kubenswrapper[4886]: I0129 17:03:49.912466 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-b7d9p" Jan 29 17:03:49 crc kubenswrapper[4886]: I0129 17:03:49.936112 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-b7d9p" podStartSLOduration=61.849424328 podStartE2EDuration="1m16.936090016s" podCreationTimestamp="2026-01-29 17:02:33 +0000 UTC" firstStartedPulling="2026-01-29 17:03:34.257614218 +0000 UTC m=+2497.166333490" lastFinishedPulling="2026-01-29 17:03:49.344279906 +0000 UTC m=+2512.252999178" observedRunningTime="2026-01-29 17:03:49.93008498 
+0000 UTC m=+2512.838804262" watchObservedRunningTime="2026-01-29 17:03:49.936090016 +0000 UTC m=+2512.844809288" Jan 29 17:03:51 crc kubenswrapper[4886]: I0129 17:03:51.495430 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ld46c" event={"ID":"ee1da890-a690-46b4-95aa-3f282b3cdc30","Type":"ContainerStarted","Data":"5688c50792f3c3255c84d31bad3708c97035368da7715c74f1a56056b63a6746"} Jan 29 17:03:51 crc kubenswrapper[4886]: I0129 17:03:51.498024 4886 generic.go:334] "Generic (PLEG): container finished" podID="3748c627-3deb-4b89-acd3-2269f42ba343" containerID="fcac16ce7b565761d87666d9cf26f0b7bab43d40d9fedf5938d903160f00e164" exitCode=0 Jan 29 17:03:51 crc kubenswrapper[4886]: I0129 17:03:51.498071 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-tn5pt" event={"ID":"3748c627-3deb-4b89-acd3-2269f42ba343","Type":"ContainerDied","Data":"fcac16ce7b565761d87666d9cf26f0b7bab43d40d9fedf5938d903160f00e164"} Jan 29 17:03:51 crc kubenswrapper[4886]: I0129 17:03:51.506137 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"842bfe4d-04ba-4143-9076-3033163c7b82","Type":"ContainerStarted","Data":"5c98fb62cf57fb19a685fed0c362721e82c04b5d528f5ad7579c1412f1f79e81"} Jan 29 17:03:51 crc kubenswrapper[4886]: I0129 17:03:51.526589 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10","Type":"ContainerStarted","Data":"e164b2712bb12971248661528d0d661417a2f6869697cd179a3843bd4e2721f1"} Jan 29 17:03:51 crc kubenswrapper[4886]: I0129 17:03:51.539515 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ld46c" podStartSLOduration=64.791384381 podStartE2EDuration="1m20.539497287s" podCreationTimestamp="2026-01-29 17:02:31 +0000 UTC" firstStartedPulling="2026-01-29 17:03:34.017638278 +0000 UTC m=+2496.926357550" lastFinishedPulling="2026-01-29 17:03:49.765751184 +0000 UTC m=+2512.674470456" observedRunningTime="2026-01-29 17:03:51.518709255 +0000 UTC m=+2514.427428537" watchObservedRunningTime="2026-01-29 17:03:51.539497287 +0000 UTC m=+2514.448216559" Jan 29 17:03:51 crc kubenswrapper[4886]: E0129 17:03:51.837010 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 29 17:03:51 crc kubenswrapper[4886]: E0129 17:03:51.837393 4886 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 29 17:03:51 crc kubenswrapper[4886]: E0129 17:03:51.837622 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods 
--namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xrp8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(dba0c99a-0f14-42bd-8822-ee79fc73ee41): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" logger="UnhandledError" Jan 29 17:03:51 crc kubenswrapper[4886]: E0129 17:03:51.839043 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="dba0c99a-0f14-42bd-8822-ee79fc73ee41" Jan 29 17:03:52 crc kubenswrapper[4886]: I0129 17:03:52.537542 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9d0db9ae-746b-419a-bc61-bf85645d2bff","Type":"ContainerStarted","Data":"90c62e1af999c12bd3cee48206c3c037d5e41331e61dd2c2d6e99f50a71acbba"} Jan 29 17:03:52 crc kubenswrapper[4886]: I0129 17:03:52.539302 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2b0be43b-8956-45aa-ad50-de9183b3fea3","Type":"ContainerStarted","Data":"121b418980e461ff82cc0059422b3aec6e494e5fd4c123ffbab962202999757c"} Jan 29 17:03:52 crc kubenswrapper[4886]: E0129 17:03:52.541246 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="dba0c99a-0f14-42bd-8822-ee79fc73ee41" Jan 29 17:03:54 crc kubenswrapper[4886]: I0129 17:03:54.577861 4886 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"39601bb5-f2bc-47a6-824a-609c207b963f","Type":"ContainerStarted","Data":"bf239556e30f9137e020bc2a6c81d2fdb898af7395a712452ca6968c9abdf04d"} Jan 29 17:03:54 crc kubenswrapper[4886]: I0129 17:03:54.587910 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7b015d0c-8672-450a-a079-965cc4ccd07f","Type":"ContainerStarted","Data":"45a24140137200a26c74210530849bf906a138a61cb80a258cb55968228dcfec"} Jan 29 17:03:54 crc kubenswrapper[4886]: I0129 17:03:54.592134 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-tn5pt" event={"ID":"3748c627-3deb-4b89-acd3-2269f42ba343","Type":"ContainerStarted","Data":"85f248c363891313b6dfd3563ffece575be09f0a7b8fb96dd58a65634816d1bc"} Jan 29 17:03:54 crc kubenswrapper[4886]: I0129 17:03:54.592487 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-tn5pt" Jan 29 17:03:54 crc kubenswrapper[4886]: I0129 17:03:54.597372 4886 generic.go:334] "Generic (PLEG): container finished" podID="03dc141f-69cc-4cb4-af0b-acf85642b86e" containerID="99eea0285f6f5f01492d9cbe469c801bd291548fbbceb2527113ae1fb3f63482" exitCode=0 Jan 29 17:03:54 crc kubenswrapper[4886]: I0129 17:03:54.597430 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xhds2" event={"ID":"03dc141f-69cc-4cb4-af0b-acf85642b86e","Type":"ContainerDied","Data":"99eea0285f6f5f01492d9cbe469c801bd291548fbbceb2527113ae1fb3f63482"} Jan 29 17:03:54 crc kubenswrapper[4886]: I0129 17:03:54.616625 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-tn5pt" podStartSLOduration=6.509872275 podStartE2EDuration="1m31.616607507s" podCreationTimestamp="2026-01-29 17:02:23 +0000 UTC" firstStartedPulling="2026-01-29 17:02:24.744303361 +0000 UTC m=+2427.653022633" lastFinishedPulling="2026-01-29 17:03:49.851038603 +0000 UTC m=+2512.759757865" observedRunningTime="2026-01-29 17:03:54.609993765 +0000 UTC m=+2517.518713057" watchObservedRunningTime="2026-01-29 17:03:54.616607507 +0000 UTC m=+2517.525326779" Jan 29 17:03:55 crc kubenswrapper[4886]: I0129 17:03:55.613456 4886 generic.go:334] "Generic (PLEG): container finished" podID="6508ccc6-d71f-449d-bbe1-83270d005815" containerID="89f82f42c505d87726312a538c1469519937b08750e6ec80466cc82da8aa0837" exitCode=0 Jan 29 17:03:55 crc kubenswrapper[4886]: I0129 17:03:55.613541 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-bqbqx" event={"ID":"6508ccc6-d71f-449d-bbe1-83270d005815","Type":"ContainerDied","Data":"89f82f42c505d87726312a538c1469519937b08750e6ec80466cc82da8aa0837"} Jan 29 17:03:55 crc kubenswrapper[4886]: I0129 17:03:55.619019 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"88c8ef15-a2b1-41df-8048-752b56d26653","Type":"ContainerStarted","Data":"10ebf425973cf40d094dde67b66d655c13aa2955f48ae0a6b4c41a153e79e60c"} Jan 29 17:03:55 crc kubenswrapper[4886]: I0129 17:03:55.619669 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 29 17:03:55 crc kubenswrapper[4886]: I0129 17:03:55.622961 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xhds2" event={"ID":"03dc141f-69cc-4cb4-af0b-acf85642b86e","Type":"ContainerStarted","Data":"948b2e4020afbef71d55c5d817cc8c2776b65ce432a68964ab8a4796a4e42a9e"} Jan 29 17:03:55 crc 
kubenswrapper[4886]: I0129 17:03:55.623007 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xhds2" event={"ID":"03dc141f-69cc-4cb4-af0b-acf85642b86e","Type":"ContainerStarted","Data":"4846a08c86363260321db374e444fb76c2cb5ca480f29eedecee09311b820036"} Jan 29 17:03:55 crc kubenswrapper[4886]: I0129 17:03:55.657885 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.356490074 podStartE2EDuration="1m27.657859076s" podCreationTimestamp="2026-01-29 17:02:28 +0000 UTC" firstStartedPulling="2026-01-29 17:02:29.131053442 +0000 UTC m=+2432.039772714" lastFinishedPulling="2026-01-29 17:03:54.432422444 +0000 UTC m=+2517.341141716" observedRunningTime="2026-01-29 17:03:55.653010363 +0000 UTC m=+2518.561729645" watchObservedRunningTime="2026-01-29 17:03:55.657859076 +0000 UTC m=+2518.566578348" Jan 29 17:03:55 crc kubenswrapper[4886]: I0129 17:03:55.673729 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-xhds2" podStartSLOduration=76.705789184 podStartE2EDuration="1m22.673709423s" podCreationTimestamp="2026-01-29 17:02:33 +0000 UTC" firstStartedPulling="2026-01-29 17:03:47.463694631 +0000 UTC m=+2510.372413903" lastFinishedPulling="2026-01-29 17:03:53.43161487 +0000 UTC m=+2516.340334142" observedRunningTime="2026-01-29 17:03:55.671368148 +0000 UTC m=+2518.580087450" watchObservedRunningTime="2026-01-29 17:03:55.673709423 +0000 UTC m=+2518.582428705" Jan 29 17:03:56 crc kubenswrapper[4886]: I0129 17:03:56.637059 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"39601bb5-f2bc-47a6-824a-609c207b963f","Type":"ContainerStarted","Data":"890150ec302223a8e2d169c0d885780677db4f8f7357b4039823615911ec1fdd"} Jan 29 17:03:56 crc kubenswrapper[4886]: I0129 17:03:56.639693 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ce7955a1-eb58-425a-872a-7ec102b8e090","Type":"ContainerStarted","Data":"583c2c73cc1b55ad9f4f022652302dc10ae77e94e45a693b0865ff8b717978ab"} Jan 29 17:03:56 crc kubenswrapper[4886]: I0129 17:03:56.641415 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7b015d0c-8672-450a-a079-965cc4ccd07f","Type":"ContainerStarted","Data":"9a0f61abb3b2a2a9f53d2b44347a687f4bb47ba68928cafd5016f226170d4374"} Jan 29 17:03:56 crc kubenswrapper[4886]: I0129 17:03:56.645402 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-bqbqx" event={"ID":"6508ccc6-d71f-449d-bbe1-83270d005815","Type":"ContainerStarted","Data":"551d6bb92bd8b9f6b94728550021f0d9b88f84765724d42a9ae9096869fe7939"} Jan 29 17:03:56 crc kubenswrapper[4886]: I0129 17:03:56.645589 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-bqbqx" Jan 29 17:03:56 crc kubenswrapper[4886]: I0129 17:03:56.648263 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"954d7d1e-fd92-4c83-87d8-87a1f866dbbe","Type":"ContainerStarted","Data":"01b438318caf5eaf9a57468dc2cc9bed9f702f5dc44dd9743a37737048ccabed"} Jan 29 17:03:56 crc kubenswrapper[4886]: I0129 17:03:56.648304 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-xhds2" Jan 29 17:03:56 crc kubenswrapper[4886]: I0129 17:03:56.648577 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/ovn-controller-ovs-xhds2" Jan 29 17:03:56 crc kubenswrapper[4886]: I0129 17:03:56.691755 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=73.169336337 podStartE2EDuration="1m20.69172979s" podCreationTimestamp="2026-01-29 17:02:36 +0000 UTC" firstStartedPulling="2026-01-29 17:03:48.397666665 +0000 UTC m=+2511.306385937" lastFinishedPulling="2026-01-29 17:03:55.920060118 +0000 UTC m=+2518.828779390" observedRunningTime="2026-01-29 17:03:56.687290278 +0000 UTC m=+2519.596009550" watchObservedRunningTime="2026-01-29 17:03:56.69172979 +0000 UTC m=+2519.600449062" Jan 29 17:03:56 crc kubenswrapper[4886]: I0129 17:03:56.731197 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=72.248392743 podStartE2EDuration="1m20.731173157s" podCreationTimestamp="2026-01-29 17:02:36 +0000 UTC" firstStartedPulling="2026-01-29 17:03:47.420022468 +0000 UTC m=+2510.328741740" lastFinishedPulling="2026-01-29 17:03:55.902802882 +0000 UTC m=+2518.811522154" observedRunningTime="2026-01-29 17:03:56.718704313 +0000 UTC m=+2519.627423585" watchObservedRunningTime="2026-01-29 17:03:56.731173157 +0000 UTC m=+2519.639892429" Jan 29 17:03:56 crc kubenswrapper[4886]: I0129 17:03:56.746653 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-bqbqx" podStartSLOduration=-9223371944.10814 podStartE2EDuration="1m32.746634003s" podCreationTimestamp="2026-01-29 17:02:24 +0000 UTC" firstStartedPulling="2026-01-29 17:02:25.070696641 +0000 UTC m=+2427.979415913" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:03:56.746450697 +0000 UTC m=+2519.655169969" watchObservedRunningTime="2026-01-29 17:03:56.746634003 +0000 UTC m=+2519.655353275" Jan 29 17:03:57 crc kubenswrapper[4886]: I0129 17:03:57.588261 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 29 17:03:57 crc kubenswrapper[4886]: I0129 17:03:57.887962 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 29 17:03:58 crc kubenswrapper[4886]: I0129 17:03:58.587404 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 29 17:03:58 crc kubenswrapper[4886]: I0129 17:03:58.642318 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 29 17:03:58 crc kubenswrapper[4886]: I0129 17:03:58.664875 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"98bed306-aa68-4e53-affc-e04497079ccb","Type":"ContainerStarted","Data":"13269c792a56983291098b79dde6fcee3fc61558ea51917d6a60175381efc4fc"} Jan 29 17:03:58 crc kubenswrapper[4886]: I0129 17:03:58.887850 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 29 17:03:58 crc kubenswrapper[4886]: I0129 17:03:58.930372 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 29 17:03:59 crc kubenswrapper[4886]: I0129 17:03:59.124615 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-tn5pt" Jan 29 17:03:59 crc kubenswrapper[4886]: I0129 17:03:59.615265 4886 scope.go:117] "RemoveContainer" 
containerID="1ef597c576c05004c5148470ade7ddd51ab3cad8d942f918ff09afb054559dfc" Jan 29 17:03:59 crc kubenswrapper[4886]: E0129 17:03:59.615545 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:03:59 crc kubenswrapper[4886]: I0129 17:03:59.722298 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 29 17:03:59 crc kubenswrapper[4886]: I0129 17:03:59.730082 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.004275 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bqbqx"] Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.005194 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-bqbqx" podUID="6508ccc6-d71f-449d-bbe1-83270d005815" containerName="dnsmasq-dns" containerID="cri-o://551d6bb92bd8b9f6b94728550021f0d9b88f84765724d42a9ae9096869fe7939" gracePeriod=10 Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.040906 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-6lgfs"] Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.042545 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-6lgfs" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.046643 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.055012 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-6lgfs"] Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.148654 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c05aff31-e011-4872-80bf-18f1b32a16e6-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-6lgfs\" (UID: \"c05aff31-e011-4872-80bf-18f1b32a16e6\") " pod="openstack/dnsmasq-dns-7f896c8c65-6lgfs" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.148975 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c05aff31-e011-4872-80bf-18f1b32a16e6-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-6lgfs\" (UID: \"c05aff31-e011-4872-80bf-18f1b32a16e6\") " pod="openstack/dnsmasq-dns-7f896c8c65-6lgfs" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.149095 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk4z9\" (UniqueName: \"kubernetes.io/projected/c05aff31-e011-4872-80bf-18f1b32a16e6-kube-api-access-sk4z9\") pod \"dnsmasq-dns-7f896c8c65-6lgfs\" (UID: \"c05aff31-e011-4872-80bf-18f1b32a16e6\") " pod="openstack/dnsmasq-dns-7f896c8c65-6lgfs" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.149131 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c05aff31-e011-4872-80bf-18f1b32a16e6-config\") pod \"dnsmasq-dns-7f896c8c65-6lgfs\" (UID: \"c05aff31-e011-4872-80bf-18f1b32a16e6\") " pod="openstack/dnsmasq-dns-7f896c8c65-6lgfs" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.173947 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-6f8zt"] Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.175648 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-6f8zt" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.177837 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.185409 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-6f8zt"] Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.250582 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ff160c34-86ad-4048-9c67-2071e6c38373-ovs-rundir\") pod \"ovn-controller-metrics-6f8zt\" (UID: \"ff160c34-86ad-4048-9c67-2071e6c38373\") " pod="openstack/ovn-controller-metrics-6f8zt" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.250658 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmj5h\" (UniqueName: \"kubernetes.io/projected/ff160c34-86ad-4048-9c67-2071e6c38373-kube-api-access-pmj5h\") pod \"ovn-controller-metrics-6f8zt\" (UID: \"ff160c34-86ad-4048-9c67-2071e6c38373\") " pod="openstack/ovn-controller-metrics-6f8zt" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.250734 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk4z9\" (UniqueName: \"kubernetes.io/projected/c05aff31-e011-4872-80bf-18f1b32a16e6-kube-api-access-sk4z9\") pod \"dnsmasq-dns-7f896c8c65-6lgfs\" (UID: \"c05aff31-e011-4872-80bf-18f1b32a16e6\") " pod="openstack/dnsmasq-dns-7f896c8c65-6lgfs" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.250777 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ff160c34-86ad-4048-9c67-2071e6c38373-ovn-rundir\") pod \"ovn-controller-metrics-6f8zt\" (UID: \"ff160c34-86ad-4048-9c67-2071e6c38373\") " pod="openstack/ovn-controller-metrics-6f8zt" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.250814 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff160c34-86ad-4048-9c67-2071e6c38373-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6f8zt\" (UID: \"ff160c34-86ad-4048-9c67-2071e6c38373\") " pod="openstack/ovn-controller-metrics-6f8zt" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.250846 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff160c34-86ad-4048-9c67-2071e6c38373-combined-ca-bundle\") pod \"ovn-controller-metrics-6f8zt\" (UID: \"ff160c34-86ad-4048-9c67-2071e6c38373\") " pod="openstack/ovn-controller-metrics-6f8zt" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.250875 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c05aff31-e011-4872-80bf-18f1b32a16e6-config\") pod \"dnsmasq-dns-7f896c8c65-6lgfs\" (UID: \"c05aff31-e011-4872-80bf-18f1b32a16e6\") " pod="openstack/dnsmasq-dns-7f896c8c65-6lgfs" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.250908 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff160c34-86ad-4048-9c67-2071e6c38373-config\") pod \"ovn-controller-metrics-6f8zt\" (UID: \"ff160c34-86ad-4048-9c67-2071e6c38373\") " pod="openstack/ovn-controller-metrics-6f8zt" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.250990 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c05aff31-e011-4872-80bf-18f1b32a16e6-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-6lgfs\" (UID: \"c05aff31-e011-4872-80bf-18f1b32a16e6\") " pod="openstack/dnsmasq-dns-7f896c8c65-6lgfs" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.251080 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c05aff31-e011-4872-80bf-18f1b32a16e6-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-6lgfs\" (UID: \"c05aff31-e011-4872-80bf-18f1b32a16e6\") " pod="openstack/dnsmasq-dns-7f896c8c65-6lgfs" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.251960 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c05aff31-e011-4872-80bf-18f1b32a16e6-config\") pod \"dnsmasq-dns-7f896c8c65-6lgfs\" (UID: \"c05aff31-e011-4872-80bf-18f1b32a16e6\") " pod="openstack/dnsmasq-dns-7f896c8c65-6lgfs" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.260679 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-6lgfs"] Jan 29 17:04:00 crc kubenswrapper[4886]: E0129 17:04:00.261434 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[dns-svc kube-api-access-sk4z9 ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-7f896c8c65-6lgfs" podUID="c05aff31-e011-4872-80bf-18f1b32a16e6" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.264817 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c05aff31-e011-4872-80bf-18f1b32a16e6-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-6lgfs\" (UID: \"c05aff31-e011-4872-80bf-18f1b32a16e6\") " pod="openstack/dnsmasq-dns-7f896c8c65-6lgfs" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.264866 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c05aff31-e011-4872-80bf-18f1b32a16e6-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-6lgfs\" (UID: \"c05aff31-e011-4872-80bf-18f1b32a16e6\") " pod="openstack/dnsmasq-dns-7f896c8c65-6lgfs" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.292354 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk4z9\" (UniqueName: \"kubernetes.io/projected/c05aff31-e011-4872-80bf-18f1b32a16e6-kube-api-access-sk4z9\") pod \"dnsmasq-dns-7f896c8c65-6lgfs\" (UID: \"c05aff31-e011-4872-80bf-18f1b32a16e6\") " pod="openstack/dnsmasq-dns-7f896c8c65-6lgfs" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.297253 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 29 17:04:00 crc 
kubenswrapper[4886]: I0129 17:04:00.299003 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.301115 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.301397 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.301556 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.301662 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-87p4g" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.312785 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-29gw9"] Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.315049 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-29gw9" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.320209 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.334713 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.355970 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc04c928-b93c-49a3-a653-f82b5e686da5-scripts\") pod \"ovn-northd-0\" (UID: \"dc04c928-b93c-49a3-a653-f82b5e686da5\") " pod="openstack/ovn-northd-0" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.356031 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc04c928-b93c-49a3-a653-f82b5e686da5-config\") pod \"ovn-northd-0\" (UID: \"dc04c928-b93c-49a3-a653-f82b5e686da5\") " pod="openstack/ovn-northd-0" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.356089 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc04c928-b93c-49a3-a653-f82b5e686da5-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"dc04c928-b93c-49a3-a653-f82b5e686da5\") " pod="openstack/ovn-northd-0" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.356117 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jqzk\" (UniqueName: \"kubernetes.io/projected/dc04c928-b93c-49a3-a653-f82b5e686da5-kube-api-access-8jqzk\") pod \"ovn-northd-0\" (UID: \"dc04c928-b93c-49a3-a653-f82b5e686da5\") " pod="openstack/ovn-northd-0" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.356162 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc04c928-b93c-49a3-a653-f82b5e686da5-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"dc04c928-b93c-49a3-a653-f82b5e686da5\") " pod="openstack/ovn-northd-0" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.356205 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" 
(UniqueName: \"kubernetes.io/empty-dir/dc04c928-b93c-49a3-a653-f82b5e686da5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"dc04c928-b93c-49a3-a653-f82b5e686da5\") " pod="openstack/ovn-northd-0" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.356228 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc04c928-b93c-49a3-a653-f82b5e686da5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"dc04c928-b93c-49a3-a653-f82b5e686da5\") " pod="openstack/ovn-northd-0" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.356261 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ff160c34-86ad-4048-9c67-2071e6c38373-ovs-rundir\") pod \"ovn-controller-metrics-6f8zt\" (UID: \"ff160c34-86ad-4048-9c67-2071e6c38373\") " pod="openstack/ovn-controller-metrics-6f8zt" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.356293 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmj5h\" (UniqueName: \"kubernetes.io/projected/ff160c34-86ad-4048-9c67-2071e6c38373-kube-api-access-pmj5h\") pod \"ovn-controller-metrics-6f8zt\" (UID: \"ff160c34-86ad-4048-9c67-2071e6c38373\") " pod="openstack/ovn-controller-metrics-6f8zt" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.356373 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ff160c34-86ad-4048-9c67-2071e6c38373-ovn-rundir\") pod \"ovn-controller-metrics-6f8zt\" (UID: \"ff160c34-86ad-4048-9c67-2071e6c38373\") " pod="openstack/ovn-controller-metrics-6f8zt" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.356400 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff160c34-86ad-4048-9c67-2071e6c38373-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6f8zt\" (UID: \"ff160c34-86ad-4048-9c67-2071e6c38373\") " pod="openstack/ovn-controller-metrics-6f8zt" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.356425 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff160c34-86ad-4048-9c67-2071e6c38373-combined-ca-bundle\") pod \"ovn-controller-metrics-6f8zt\" (UID: \"ff160c34-86ad-4048-9c67-2071e6c38373\") " pod="openstack/ovn-controller-metrics-6f8zt" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.356461 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff160c34-86ad-4048-9c67-2071e6c38373-config\") pod \"ovn-controller-metrics-6f8zt\" (UID: \"ff160c34-86ad-4048-9c67-2071e6c38373\") " pod="openstack/ovn-controller-metrics-6f8zt" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.357571 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ff160c34-86ad-4048-9c67-2071e6c38373-ovs-rundir\") pod \"ovn-controller-metrics-6f8zt\" (UID: \"ff160c34-86ad-4048-9c67-2071e6c38373\") " pod="openstack/ovn-controller-metrics-6f8zt" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.358077 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ff160c34-86ad-4048-9c67-2071e6c38373-ovn-rundir\") pod 
\"ovn-controller-metrics-6f8zt\" (UID: \"ff160c34-86ad-4048-9c67-2071e6c38373\") " pod="openstack/ovn-controller-metrics-6f8zt" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.368018 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff160c34-86ad-4048-9c67-2071e6c38373-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6f8zt\" (UID: \"ff160c34-86ad-4048-9c67-2071e6c38373\") " pod="openstack/ovn-controller-metrics-6f8zt" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.376438 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff160c34-86ad-4048-9c67-2071e6c38373-config\") pod \"ovn-controller-metrics-6f8zt\" (UID: \"ff160c34-86ad-4048-9c67-2071e6c38373\") " pod="openstack/ovn-controller-metrics-6f8zt" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.387088 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff160c34-86ad-4048-9c67-2071e6c38373-combined-ca-bundle\") pod \"ovn-controller-metrics-6f8zt\" (UID: \"ff160c34-86ad-4048-9c67-2071e6c38373\") " pod="openstack/ovn-controller-metrics-6f8zt" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.416199 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-29gw9"] Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.420851 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmj5h\" (UniqueName: \"kubernetes.io/projected/ff160c34-86ad-4048-9c67-2071e6c38373-kube-api-access-pmj5h\") pod \"ovn-controller-metrics-6f8zt\" (UID: \"ff160c34-86ad-4048-9c67-2071e6c38373\") " pod="openstack/ovn-controller-metrics-6f8zt" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.463475 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc04c928-b93c-49a3-a653-f82b5e686da5-config\") pod \"ovn-northd-0\" (UID: \"dc04c928-b93c-49a3-a653-f82b5e686da5\") " pod="openstack/ovn-northd-0" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.463534 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc04c928-b93c-49a3-a653-f82b5e686da5-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"dc04c928-b93c-49a3-a653-f82b5e686da5\") " pod="openstack/ovn-northd-0" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.463564 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5gl6\" (UniqueName: \"kubernetes.io/projected/4ef7b166-c078-4530-b05b-ae3e44088122-kube-api-access-h5gl6\") pod \"dnsmasq-dns-86db49b7ff-29gw9\" (UID: \"4ef7b166-c078-4530-b05b-ae3e44088122\") " pod="openstack/dnsmasq-dns-86db49b7ff-29gw9" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.463583 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jqzk\" (UniqueName: \"kubernetes.io/projected/dc04c928-b93c-49a3-a653-f82b5e686da5-kube-api-access-8jqzk\") pod \"ovn-northd-0\" (UID: \"dc04c928-b93c-49a3-a653-f82b5e686da5\") " pod="openstack/ovn-northd-0" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.463633 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/dc04c928-b93c-49a3-a653-f82b5e686da5-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"dc04c928-b93c-49a3-a653-f82b5e686da5\") " pod="openstack/ovn-northd-0" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.463662 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ef7b166-c078-4530-b05b-ae3e44088122-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-29gw9\" (UID: \"4ef7b166-c078-4530-b05b-ae3e44088122\") " pod="openstack/dnsmasq-dns-86db49b7ff-29gw9" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.463680 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ef7b166-c078-4530-b05b-ae3e44088122-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-29gw9\" (UID: \"4ef7b166-c078-4530-b05b-ae3e44088122\") " pod="openstack/dnsmasq-dns-86db49b7ff-29gw9" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.463735 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dc04c928-b93c-49a3-a653-f82b5e686da5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"dc04c928-b93c-49a3-a653-f82b5e686da5\") " pod="openstack/ovn-northd-0" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.463761 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc04c928-b93c-49a3-a653-f82b5e686da5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"dc04c928-b93c-49a3-a653-f82b5e686da5\") " pod="openstack/ovn-northd-0" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.463833 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ef7b166-c078-4530-b05b-ae3e44088122-config\") pod \"dnsmasq-dns-86db49b7ff-29gw9\" (UID: \"4ef7b166-c078-4530-b05b-ae3e44088122\") " pod="openstack/dnsmasq-dns-86db49b7ff-29gw9" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.463849 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ef7b166-c078-4530-b05b-ae3e44088122-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-29gw9\" (UID: \"4ef7b166-c078-4530-b05b-ae3e44088122\") " pod="openstack/dnsmasq-dns-86db49b7ff-29gw9" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.463901 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc04c928-b93c-49a3-a653-f82b5e686da5-scripts\") pod \"ovn-northd-0\" (UID: \"dc04c928-b93c-49a3-a653-f82b5e686da5\") " pod="openstack/ovn-northd-0" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.464883 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc04c928-b93c-49a3-a653-f82b5e686da5-scripts\") pod \"ovn-northd-0\" (UID: \"dc04c928-b93c-49a3-a653-f82b5e686da5\") " pod="openstack/ovn-northd-0" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.465421 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc04c928-b93c-49a3-a653-f82b5e686da5-config\") pod \"ovn-northd-0\" (UID: \"dc04c928-b93c-49a3-a653-f82b5e686da5\") " pod="openstack/ovn-northd-0" Jan 29 17:04:00 crc kubenswrapper[4886]: 
I0129 17:04:00.466567 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dc04c928-b93c-49a3-a653-f82b5e686da5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"dc04c928-b93c-49a3-a653-f82b5e686da5\") " pod="openstack/ovn-northd-0" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.485086 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc04c928-b93c-49a3-a653-f82b5e686da5-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"dc04c928-b93c-49a3-a653-f82b5e686da5\") " pod="openstack/ovn-northd-0" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.495637 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc04c928-b93c-49a3-a653-f82b5e686da5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"dc04c928-b93c-49a3-a653-f82b5e686da5\") " pod="openstack/ovn-northd-0" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.496516 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc04c928-b93c-49a3-a653-f82b5e686da5-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"dc04c928-b93c-49a3-a653-f82b5e686da5\") " pod="openstack/ovn-northd-0" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.509601 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jqzk\" (UniqueName: \"kubernetes.io/projected/dc04c928-b93c-49a3-a653-f82b5e686da5-kube-api-access-8jqzk\") pod \"ovn-northd-0\" (UID: \"dc04c928-b93c-49a3-a653-f82b5e686da5\") " pod="openstack/ovn-northd-0" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.542349 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-6f8zt" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.566345 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ef7b166-c078-4530-b05b-ae3e44088122-config\") pod \"dnsmasq-dns-86db49b7ff-29gw9\" (UID: \"4ef7b166-c078-4530-b05b-ae3e44088122\") " pod="openstack/dnsmasq-dns-86db49b7ff-29gw9" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.566383 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ef7b166-c078-4530-b05b-ae3e44088122-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-29gw9\" (UID: \"4ef7b166-c078-4530-b05b-ae3e44088122\") " pod="openstack/dnsmasq-dns-86db49b7ff-29gw9" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.566464 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5gl6\" (UniqueName: \"kubernetes.io/projected/4ef7b166-c078-4530-b05b-ae3e44088122-kube-api-access-h5gl6\") pod \"dnsmasq-dns-86db49b7ff-29gw9\" (UID: \"4ef7b166-c078-4530-b05b-ae3e44088122\") " pod="openstack/dnsmasq-dns-86db49b7ff-29gw9" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.566509 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ef7b166-c078-4530-b05b-ae3e44088122-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-29gw9\" (UID: \"4ef7b166-c078-4530-b05b-ae3e44088122\") " pod="openstack/dnsmasq-dns-86db49b7ff-29gw9" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.566529 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ef7b166-c078-4530-b05b-ae3e44088122-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-29gw9\" (UID: \"4ef7b166-c078-4530-b05b-ae3e44088122\") " pod="openstack/dnsmasq-dns-86db49b7ff-29gw9" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.567447 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ef7b166-c078-4530-b05b-ae3e44088122-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-29gw9\" (UID: \"4ef7b166-c078-4530-b05b-ae3e44088122\") " pod="openstack/dnsmasq-dns-86db49b7ff-29gw9" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.568007 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ef7b166-c078-4530-b05b-ae3e44088122-config\") pod \"dnsmasq-dns-86db49b7ff-29gw9\" (UID: \"4ef7b166-c078-4530-b05b-ae3e44088122\") " pod="openstack/dnsmasq-dns-86db49b7ff-29gw9" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.568353 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.569512 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ef7b166-c078-4530-b05b-ae3e44088122-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-29gw9\" (UID: \"4ef7b166-c078-4530-b05b-ae3e44088122\") " pod="openstack/dnsmasq-dns-86db49b7ff-29gw9" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.590166 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ef7b166-c078-4530-b05b-ae3e44088122-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-29gw9\" (UID: \"4ef7b166-c078-4530-b05b-ae3e44088122\") " pod="openstack/dnsmasq-dns-86db49b7ff-29gw9" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.627167 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5gl6\" (UniqueName: \"kubernetes.io/projected/4ef7b166-c078-4530-b05b-ae3e44088122-kube-api-access-h5gl6\") pod \"dnsmasq-dns-86db49b7ff-29gw9\" (UID: \"4ef7b166-c078-4530-b05b-ae3e44088122\") " pod="openstack/dnsmasq-dns-86db49b7ff-29gw9" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.728438 4886 generic.go:334] "Generic (PLEG): container finished" podID="6508ccc6-d71f-449d-bbe1-83270d005815" containerID="551d6bb92bd8b9f6b94728550021f0d9b88f84765724d42a9ae9096869fe7939" exitCode=0 Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.728975 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-bqbqx" event={"ID":"6508ccc6-d71f-449d-bbe1-83270d005815","Type":"ContainerDied","Data":"551d6bb92bd8b9f6b94728550021f0d9b88f84765724d42a9ae9096869fe7939"} Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.729094 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-6lgfs" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.756266 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-6lgfs" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.756800 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-bqbqx" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.878838 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c05aff31-e011-4872-80bf-18f1b32a16e6-dns-svc\") pod \"c05aff31-e011-4872-80bf-18f1b32a16e6\" (UID: \"c05aff31-e011-4872-80bf-18f1b32a16e6\") " Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.879208 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sk4z9\" (UniqueName: \"kubernetes.io/projected/c05aff31-e011-4872-80bf-18f1b32a16e6-kube-api-access-sk4z9\") pod \"c05aff31-e011-4872-80bf-18f1b32a16e6\" (UID: \"c05aff31-e011-4872-80bf-18f1b32a16e6\") " Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.879289 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c05aff31-e011-4872-80bf-18f1b32a16e6-config\") pod \"c05aff31-e011-4872-80bf-18f1b32a16e6\" (UID: \"c05aff31-e011-4872-80bf-18f1b32a16e6\") " Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.879336 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c05aff31-e011-4872-80bf-18f1b32a16e6-ovsdbserver-sb\") pod \"c05aff31-e011-4872-80bf-18f1b32a16e6\" (UID: \"c05aff31-e011-4872-80bf-18f1b32a16e6\") " Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.879396 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6508ccc6-d71f-449d-bbe1-83270d005815-config\") pod \"6508ccc6-d71f-449d-bbe1-83270d005815\" (UID: \"6508ccc6-d71f-449d-bbe1-83270d005815\") " Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.879404 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c05aff31-e011-4872-80bf-18f1b32a16e6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c05aff31-e011-4872-80bf-18f1b32a16e6" (UID: "c05aff31-e011-4872-80bf-18f1b32a16e6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.879423 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6508ccc6-d71f-449d-bbe1-83270d005815-dns-svc\") pod \"6508ccc6-d71f-449d-bbe1-83270d005815\" (UID: \"6508ccc6-d71f-449d-bbe1-83270d005815\") " Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.879496 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb44s\" (UniqueName: \"kubernetes.io/projected/6508ccc6-d71f-449d-bbe1-83270d005815-kube-api-access-kb44s\") pod \"6508ccc6-d71f-449d-bbe1-83270d005815\" (UID: \"6508ccc6-d71f-449d-bbe1-83270d005815\") " Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.879670 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c05aff31-e011-4872-80bf-18f1b32a16e6-config" (OuterVolumeSpecName: "config") pod "c05aff31-e011-4872-80bf-18f1b32a16e6" (UID: "c05aff31-e011-4872-80bf-18f1b32a16e6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.880264 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c05aff31-e011-4872-80bf-18f1b32a16e6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.880284 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c05aff31-e011-4872-80bf-18f1b32a16e6-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.884700 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c05aff31-e011-4872-80bf-18f1b32a16e6-kube-api-access-sk4z9" (OuterVolumeSpecName: "kube-api-access-sk4z9") pod "c05aff31-e011-4872-80bf-18f1b32a16e6" (UID: "c05aff31-e011-4872-80bf-18f1b32a16e6"). InnerVolumeSpecName "kube-api-access-sk4z9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.887237 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c05aff31-e011-4872-80bf-18f1b32a16e6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c05aff31-e011-4872-80bf-18f1b32a16e6" (UID: "c05aff31-e011-4872-80bf-18f1b32a16e6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.890296 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6508ccc6-d71f-449d-bbe1-83270d005815-kube-api-access-kb44s" (OuterVolumeSpecName: "kube-api-access-kb44s") pod "6508ccc6-d71f-449d-bbe1-83270d005815" (UID: "6508ccc6-d71f-449d-bbe1-83270d005815"). InnerVolumeSpecName "kube-api-access-kb44s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.915911 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-29gw9" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.936148 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6508ccc6-d71f-449d-bbe1-83270d005815-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6508ccc6-d71f-449d-bbe1-83270d005815" (UID: "6508ccc6-d71f-449d-bbe1-83270d005815"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.950554 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6508ccc6-d71f-449d-bbe1-83270d005815-config" (OuterVolumeSpecName: "config") pod "6508ccc6-d71f-449d-bbe1-83270d005815" (UID: "6508ccc6-d71f-449d-bbe1-83270d005815"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.981824 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6508ccc6-d71f-449d-bbe1-83270d005815-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.981855 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6508ccc6-d71f-449d-bbe1-83270d005815-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.981866 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb44s\" (UniqueName: \"kubernetes.io/projected/6508ccc6-d71f-449d-bbe1-83270d005815-kube-api-access-kb44s\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.981876 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sk4z9\" (UniqueName: \"kubernetes.io/projected/c05aff31-e011-4872-80bf-18f1b32a16e6-kube-api-access-sk4z9\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:00 crc kubenswrapper[4886]: I0129 17:04:00.981885 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c05aff31-e011-4872-80bf-18f1b32a16e6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:01 crc kubenswrapper[4886]: I0129 17:04:01.247204 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-6f8zt"] Jan 29 17:04:01 crc kubenswrapper[4886]: I0129 17:04:01.255379 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 17:04:01 crc kubenswrapper[4886]: I0129 17:04:01.470911 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-29gw9"] Jan 29 17:04:01 crc kubenswrapper[4886]: W0129 17:04:01.485149 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ef7b166_c078_4530_b05b_ae3e44088122.slice/crio-b30007dc7ac0cb559fa26a9b1b3904c3d91b03c66e5d4e617cb72bf920854daa WatchSource:0}: Error finding container b30007dc7ac0cb559fa26a9b1b3904c3d91b03c66e5d4e617cb72bf920854daa: Status 404 returned error can't find the container with id b30007dc7ac0cb559fa26a9b1b3904c3d91b03c66e5d4e617cb72bf920854daa Jan 29 17:04:01 crc kubenswrapper[4886]: I0129 17:04:01.738224 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6f8zt" event={"ID":"ff160c34-86ad-4048-9c67-2071e6c38373","Type":"ContainerStarted","Data":"691ff7220e1361142913343ee9d06191daae7edcbc017e98673318e7c4dcf180"} Jan 29 17:04:01 crc kubenswrapper[4886]: I0129 17:04:01.738281 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6f8zt" event={"ID":"ff160c34-86ad-4048-9c67-2071e6c38373","Type":"ContainerStarted","Data":"3f6d99f29803aa2336fff1711f2e7466d9a294ec38bb78c9a79953dda4e63501"} Jan 29 17:04:01 crc kubenswrapper[4886]: I0129 17:04:01.740361 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-bqbqx" Jan 29 17:04:01 crc kubenswrapper[4886]: I0129 17:04:01.740361 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-bqbqx" event={"ID":"6508ccc6-d71f-449d-bbe1-83270d005815","Type":"ContainerDied","Data":"3cb5dbf55000d2d62fd9df0707aa0b2ae3790c985165faca182a19e1e38e6908"} Jan 29 17:04:01 crc kubenswrapper[4886]: I0129 17:04:01.740508 4886 scope.go:117] "RemoveContainer" containerID="551d6bb92bd8b9f6b94728550021f0d9b88f84765724d42a9ae9096869fe7939" Jan 29 17:04:01 crc kubenswrapper[4886]: I0129 17:04:01.742395 4886 generic.go:334] "Generic (PLEG): container finished" podID="954d7d1e-fd92-4c83-87d8-87a1f866dbbe" containerID="01b438318caf5eaf9a57468dc2cc9bed9f702f5dc44dd9743a37737048ccabed" exitCode=0 Jan 29 17:04:01 crc kubenswrapper[4886]: I0129 17:04:01.742450 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"954d7d1e-fd92-4c83-87d8-87a1f866dbbe","Type":"ContainerDied","Data":"01b438318caf5eaf9a57468dc2cc9bed9f702f5dc44dd9743a37737048ccabed"} Jan 29 17:04:01 crc kubenswrapper[4886]: I0129 17:04:01.744678 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"dc04c928-b93c-49a3-a653-f82b5e686da5","Type":"ContainerStarted","Data":"44432bb9efbb5eec2088da4bed39ca91585697f30ae31fbcae52c1a9fa8c6ba9"} Jan 29 17:04:01 crc kubenswrapper[4886]: I0129 17:04:01.748297 4886 generic.go:334] "Generic (PLEG): container finished" podID="4ef7b166-c078-4530-b05b-ae3e44088122" containerID="cbbe07486135ddfe120920c1f4f9ccadece896cbebac702a4fee9f0d2022f4db" exitCode=0 Jan 29 17:04:01 crc kubenswrapper[4886]: I0129 17:04:01.748378 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-6lgfs" Jan 29 17:04:01 crc kubenswrapper[4886]: I0129 17:04:01.748494 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-29gw9" event={"ID":"4ef7b166-c078-4530-b05b-ae3e44088122","Type":"ContainerDied","Data":"cbbe07486135ddfe120920c1f4f9ccadece896cbebac702a4fee9f0d2022f4db"} Jan 29 17:04:01 crc kubenswrapper[4886]: I0129 17:04:01.748548 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-29gw9" event={"ID":"4ef7b166-c078-4530-b05b-ae3e44088122","Type":"ContainerStarted","Data":"b30007dc7ac0cb559fa26a9b1b3904c3d91b03c66e5d4e617cb72bf920854daa"} Jan 29 17:04:01 crc kubenswrapper[4886]: I0129 17:04:01.801936 4886 scope.go:117] "RemoveContainer" containerID="89f82f42c505d87726312a538c1469519937b08750e6ec80466cc82da8aa0837" Jan 29 17:04:01 crc kubenswrapper[4886]: I0129 17:04:01.826715 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-6f8zt" podStartSLOduration=1.826695419 podStartE2EDuration="1.826695419s" podCreationTimestamp="2026-01-29 17:04:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:04:01.756438424 +0000 UTC m=+2524.665157696" watchObservedRunningTime="2026-01-29 17:04:01.826695419 +0000 UTC m=+2524.735414691" Jan 29 17:04:01 crc kubenswrapper[4886]: I0129 17:04:01.930085 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-6lgfs"] Jan 29 17:04:01 crc kubenswrapper[4886]: I0129 17:04:01.938611 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-6lgfs"] Jan 29 17:04:01 crc kubenswrapper[4886]: I0129 17:04:01.946078 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bqbqx"] Jan 29 17:04:01 crc kubenswrapper[4886]: I0129 17:04:01.954082 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bqbqx"] Jan 29 17:04:02 crc kubenswrapper[4886]: I0129 17:04:02.630913 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6508ccc6-d71f-449d-bbe1-83270d005815" path="/var/lib/kubelet/pods/6508ccc6-d71f-449d-bbe1-83270d005815/volumes" Jan 29 17:04:02 crc kubenswrapper[4886]: I0129 17:04:02.632342 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c05aff31-e011-4872-80bf-18f1b32a16e6" path="/var/lib/kubelet/pods/c05aff31-e011-4872-80bf-18f1b32a16e6/volumes" Jan 29 17:04:02 crc kubenswrapper[4886]: I0129 17:04:02.771077 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"954d7d1e-fd92-4c83-87d8-87a1f866dbbe","Type":"ContainerStarted","Data":"49bc2884b26abe4f9087c468400ed26f82e277abb56ff1ac1083e5b7f95edffe"} Jan 29 17:04:02 crc kubenswrapper[4886]: I0129 17:04:02.774235 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-29gw9" event={"ID":"4ef7b166-c078-4530-b05b-ae3e44088122","Type":"ContainerStarted","Data":"e0d2fbb581e1f1576641f1d25760b3a9a9b2fc1c9e7db710f6875c72957b1c0b"} Jan 29 17:04:02 crc kubenswrapper[4886]: I0129 17:04:02.774614 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-29gw9" Jan 29 17:04:02 crc kubenswrapper[4886]: I0129 17:04:02.776740 4886 generic.go:334] "Generic (PLEG): container finished" 
podID="98bed306-aa68-4e53-affc-e04497079ccb" containerID="13269c792a56983291098b79dde6fcee3fc61558ea51917d6a60175381efc4fc" exitCode=0 Jan 29 17:04:02 crc kubenswrapper[4886]: I0129 17:04:02.777165 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"98bed306-aa68-4e53-affc-e04497079ccb","Type":"ContainerDied","Data":"13269c792a56983291098b79dde6fcee3fc61558ea51917d6a60175381efc4fc"} Jan 29 17:04:02 crc kubenswrapper[4886]: I0129 17:04:02.796386 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=9.955985326 podStartE2EDuration="1m36.796368756s" podCreationTimestamp="2026-01-29 17:02:26 +0000 UTC" firstStartedPulling="2026-01-29 17:02:29.422604692 +0000 UTC m=+2432.331323964" lastFinishedPulling="2026-01-29 17:03:56.262988122 +0000 UTC m=+2519.171707394" observedRunningTime="2026-01-29 17:04:02.793775154 +0000 UTC m=+2525.702494426" watchObservedRunningTime="2026-01-29 17:04:02.796368756 +0000 UTC m=+2525.705088028" Jan 29 17:04:02 crc kubenswrapper[4886]: I0129 17:04:02.822214 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-29gw9" podStartSLOduration=2.822196007 podStartE2EDuration="2.822196007s" podCreationTimestamp="2026-01-29 17:04:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:04:02.815572455 +0000 UTC m=+2525.724291737" watchObservedRunningTime="2026-01-29 17:04:02.822196007 +0000 UTC m=+2525.730915269" Jan 29 17:04:03 crc kubenswrapper[4886]: I0129 17:04:03.496312 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 29 17:04:03 crc kubenswrapper[4886]: I0129 17:04:03.793465 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"98bed306-aa68-4e53-affc-e04497079ccb","Type":"ContainerStarted","Data":"5705babd04f038e45524f2765a20c44405227f6554f54075ed01b05809eea45e"} Jan 29 17:04:03 crc kubenswrapper[4886]: I0129 17:04:03.795124 4886 generic.go:334] "Generic (PLEG): container finished" podID="ce7955a1-eb58-425a-872a-7ec102b8e090" containerID="583c2c73cc1b55ad9f4f022652302dc10ae77e94e45a693b0865ff8b717978ab" exitCode=0 Jan 29 17:04:03 crc kubenswrapper[4886]: I0129 17:04:03.795174 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ce7955a1-eb58-425a-872a-7ec102b8e090","Type":"ContainerDied","Data":"583c2c73cc1b55ad9f4f022652302dc10ae77e94e45a693b0865ff8b717978ab"} Jan 29 17:04:03 crc kubenswrapper[4886]: I0129 17:04:03.832746 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371938.022049 podStartE2EDuration="1m38.832727299s" podCreationTimestamp="2026-01-29 17:02:25 +0000 UTC" firstStartedPulling="2026-01-29 17:02:27.530602762 +0000 UTC m=+2430.439322034" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:04:03.827656519 +0000 UTC m=+2526.736375801" watchObservedRunningTime="2026-01-29 17:04:03.832727299 +0000 UTC m=+2526.741446571" Jan 29 17:04:06 crc kubenswrapper[4886]: I0129 17:04:06.823583 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 29 17:04:06 crc kubenswrapper[4886]: I0129 17:04:06.823969 4886 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 29 17:04:07 crc kubenswrapper[4886]: I0129 17:04:07.996537 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7d44f9f6d-wvkcd" podUID="d7eb0acf-dfc4-4c24-8231-bfae5b620653" containerName="console" containerID="cri-o://83d754bde6259c4ef4756a1b0a86efc202f6d81cccfa70e563b1ad9cae41b68f" gracePeriod=15 Jan 29 17:04:08 crc kubenswrapper[4886]: I0129 17:04:08.459633 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 29 17:04:08 crc kubenswrapper[4886]: I0129 17:04:08.459684 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 29 17:04:08 crc kubenswrapper[4886]: I0129 17:04:08.839072 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7d44f9f6d-wvkcd_d7eb0acf-dfc4-4c24-8231-bfae5b620653/console/0.log" Jan 29 17:04:08 crc kubenswrapper[4886]: I0129 17:04:08.839320 4886 generic.go:334] "Generic (PLEG): container finished" podID="d7eb0acf-dfc4-4c24-8231-bfae5b620653" containerID="83d754bde6259c4ef4756a1b0a86efc202f6d81cccfa70e563b1ad9cae41b68f" exitCode=2 Jan 29 17:04:08 crc kubenswrapper[4886]: I0129 17:04:08.839360 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7d44f9f6d-wvkcd" event={"ID":"d7eb0acf-dfc4-4c24-8231-bfae5b620653","Type":"ContainerDied","Data":"83d754bde6259c4ef4756a1b0a86efc202f6d81cccfa70e563b1ad9cae41b68f"} Jan 29 17:04:09 crc kubenswrapper[4886]: I0129 17:04:09.491249 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7d44f9f6d-wvkcd_d7eb0acf-dfc4-4c24-8231-bfae5b620653/console/0.log" Jan 29 17:04:09 crc kubenswrapper[4886]: I0129 17:04:09.492614 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7d44f9f6d-wvkcd" Jan 29 17:04:09 crc kubenswrapper[4886]: I0129 17:04:09.647744 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d7eb0acf-dfc4-4c24-8231-bfae5b620653-console-config\") pod \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\" (UID: \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\") " Jan 29 17:04:09 crc kubenswrapper[4886]: I0129 17:04:09.647915 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7eb0acf-dfc4-4c24-8231-bfae5b620653-trusted-ca-bundle\") pod \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\" (UID: \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\") " Jan 29 17:04:09 crc kubenswrapper[4886]: I0129 17:04:09.647938 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d7eb0acf-dfc4-4c24-8231-bfae5b620653-oauth-serving-cert\") pod \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\" (UID: \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\") " Jan 29 17:04:09 crc kubenswrapper[4886]: I0129 17:04:09.647980 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7eb0acf-dfc4-4c24-8231-bfae5b620653-console-serving-cert\") pod \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\" (UID: \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\") " Jan 29 17:04:09 crc kubenswrapper[4886]: I0129 17:04:09.648017 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d7eb0acf-dfc4-4c24-8231-bfae5b620653-console-oauth-config\") pod \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\" (UID: \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\") " Jan 29 17:04:09 crc kubenswrapper[4886]: I0129 17:04:09.648039 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt776\" (UniqueName: \"kubernetes.io/projected/d7eb0acf-dfc4-4c24-8231-bfae5b620653-kube-api-access-vt776\") pod \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\" (UID: \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\") " Jan 29 17:04:09 crc kubenswrapper[4886]: I0129 17:04:09.648140 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7eb0acf-dfc4-4c24-8231-bfae5b620653-service-ca\") pod \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\" (UID: \"d7eb0acf-dfc4-4c24-8231-bfae5b620653\") " Jan 29 17:04:09 crc kubenswrapper[4886]: I0129 17:04:09.648478 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7eb0acf-dfc4-4c24-8231-bfae5b620653-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d7eb0acf-dfc4-4c24-8231-bfae5b620653" (UID: "d7eb0acf-dfc4-4c24-8231-bfae5b620653"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:04:09 crc kubenswrapper[4886]: I0129 17:04:09.648500 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7eb0acf-dfc4-4c24-8231-bfae5b620653-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "d7eb0acf-dfc4-4c24-8231-bfae5b620653" (UID: "d7eb0acf-dfc4-4c24-8231-bfae5b620653"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:04:09 crc kubenswrapper[4886]: I0129 17:04:09.648549 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7eb0acf-dfc4-4c24-8231-bfae5b620653-console-config" (OuterVolumeSpecName: "console-config") pod "d7eb0acf-dfc4-4c24-8231-bfae5b620653" (UID: "d7eb0acf-dfc4-4c24-8231-bfae5b620653"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:04:09 crc kubenswrapper[4886]: I0129 17:04:09.648965 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7eb0acf-dfc4-4c24-8231-bfae5b620653-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7eb0acf-dfc4-4c24-8231-bfae5b620653" (UID: "d7eb0acf-dfc4-4c24-8231-bfae5b620653"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:04:09 crc kubenswrapper[4886]: I0129 17:04:09.653205 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7eb0acf-dfc4-4c24-8231-bfae5b620653-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d7eb0acf-dfc4-4c24-8231-bfae5b620653" (UID: "d7eb0acf-dfc4-4c24-8231-bfae5b620653"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:04:09 crc kubenswrapper[4886]: I0129 17:04:09.654044 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7eb0acf-dfc4-4c24-8231-bfae5b620653-kube-api-access-vt776" (OuterVolumeSpecName: "kube-api-access-vt776") pod "d7eb0acf-dfc4-4c24-8231-bfae5b620653" (UID: "d7eb0acf-dfc4-4c24-8231-bfae5b620653"). InnerVolumeSpecName "kube-api-access-vt776". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:04:09 crc kubenswrapper[4886]: I0129 17:04:09.654062 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7eb0acf-dfc4-4c24-8231-bfae5b620653-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "d7eb0acf-dfc4-4c24-8231-bfae5b620653" (UID: "d7eb0acf-dfc4-4c24-8231-bfae5b620653"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:04:09 crc kubenswrapper[4886]: I0129 17:04:09.750907 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt776\" (UniqueName: \"kubernetes.io/projected/d7eb0acf-dfc4-4c24-8231-bfae5b620653-kube-api-access-vt776\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:09 crc kubenswrapper[4886]: I0129 17:04:09.750944 4886 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7eb0acf-dfc4-4c24-8231-bfae5b620653-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:09 crc kubenswrapper[4886]: I0129 17:04:09.750954 4886 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d7eb0acf-dfc4-4c24-8231-bfae5b620653-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:09 crc kubenswrapper[4886]: I0129 17:04:09.750964 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7eb0acf-dfc4-4c24-8231-bfae5b620653-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:09 crc kubenswrapper[4886]: I0129 17:04:09.750972 4886 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d7eb0acf-dfc4-4c24-8231-bfae5b620653-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:09 crc kubenswrapper[4886]: I0129 17:04:09.750980 4886 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7eb0acf-dfc4-4c24-8231-bfae5b620653-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:09 crc kubenswrapper[4886]: I0129 17:04:09.750989 4886 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d7eb0acf-dfc4-4c24-8231-bfae5b620653-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:09 crc kubenswrapper[4886]: I0129 17:04:09.851768 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"dc04c928-b93c-49a3-a653-f82b5e686da5","Type":"ContainerStarted","Data":"299f1c944b5c2254f62c4b9d1ad7c85c5444476239d2e24312d2b87d231b97eb"} Jan 29 17:04:09 crc kubenswrapper[4886]: I0129 17:04:09.853923 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7d44f9f6d-wvkcd_d7eb0acf-dfc4-4c24-8231-bfae5b620653/console/0.log" Jan 29 17:04:09 crc kubenswrapper[4886]: I0129 17:04:09.853974 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7d44f9f6d-wvkcd" event={"ID":"d7eb0acf-dfc4-4c24-8231-bfae5b620653","Type":"ContainerDied","Data":"2dde3f8777f56361bbc961c320b3499545e524fdb56d2e7e1762b3c549f1e8ca"} Jan 29 17:04:09 crc kubenswrapper[4886]: I0129 17:04:09.853999 4886 scope.go:117] "RemoveContainer" containerID="83d754bde6259c4ef4756a1b0a86efc202f6d81cccfa70e563b1ad9cae41b68f" Jan 29 17:04:09 crc kubenswrapper[4886]: I0129 17:04:09.854095 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7d44f9f6d-wvkcd" Jan 29 17:04:09 crc kubenswrapper[4886]: I0129 17:04:09.890766 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7d44f9f6d-wvkcd"] Jan 29 17:04:09 crc kubenswrapper[4886]: I0129 17:04:09.902743 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7d44f9f6d-wvkcd"] Jan 29 17:04:10 crc kubenswrapper[4886]: E0129 17:04:10.012000 4886 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7eb0acf_dfc4_4c24_8231_bfae5b620653.slice/crio-2dde3f8777f56361bbc961c320b3499545e524fdb56d2e7e1762b3c549f1e8ca\": RecentStats: unable to find data in memory cache]" Jan 29 17:04:10 crc kubenswrapper[4886]: I0129 17:04:10.630379 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7eb0acf-dfc4-4c24-8231-bfae5b620653" path="/var/lib/kubelet/pods/d7eb0acf-dfc4-4c24-8231-bfae5b620653/volumes" Jan 29 17:04:10 crc kubenswrapper[4886]: I0129 17:04:10.871703 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"dc04c928-b93c-49a3-a653-f82b5e686da5","Type":"ContainerStarted","Data":"bc05d345a8c98d624229f73d9cd80f1cb6f8add35043ec8de2ca7a9a4647850e"} Jan 29 17:04:10 crc kubenswrapper[4886]: I0129 17:04:10.873420 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 29 17:04:10 crc kubenswrapper[4886]: I0129 17:04:10.910013 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.927404395 podStartE2EDuration="10.909991702s" podCreationTimestamp="2026-01-29 17:04:00 +0000 UTC" firstStartedPulling="2026-01-29 17:04:01.292130946 +0000 UTC m=+2524.200850228" lastFinishedPulling="2026-01-29 17:04:09.274718263 +0000 UTC m=+2532.183437535" observedRunningTime="2026-01-29 17:04:10.891231415 +0000 UTC m=+2533.799950687" watchObservedRunningTime="2026-01-29 17:04:10.909991702 +0000 UTC m=+2533.818710984" Jan 29 17:04:10 crc kubenswrapper[4886]: I0129 17:04:10.919767 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-29gw9" Jan 29 17:04:11 crc kubenswrapper[4886]: I0129 17:04:11.078565 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-tn5pt"] Jan 29 17:04:11 crc kubenswrapper[4886]: I0129 17:04:11.078926 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-tn5pt" podUID="3748c627-3deb-4b89-acd3-2269f42ba343" containerName="dnsmasq-dns" containerID="cri-o://85f248c363891313b6dfd3563ffece575be09f0a7b8fb96dd58a65634816d1bc" gracePeriod=10 Jan 29 17:04:11 crc kubenswrapper[4886]: I0129 17:04:11.116587 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-t8rs7"] Jan 29 17:04:11 crc kubenswrapper[4886]: E0129 17:04:11.117050 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6508ccc6-d71f-449d-bbe1-83270d005815" containerName="init" Jan 29 17:04:11 crc kubenswrapper[4886]: I0129 17:04:11.117075 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="6508ccc6-d71f-449d-bbe1-83270d005815" containerName="init" Jan 29 17:04:11 crc kubenswrapper[4886]: E0129 17:04:11.117108 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7eb0acf-dfc4-4c24-8231-bfae5b620653" 
containerName="console" Jan 29 17:04:11 crc kubenswrapper[4886]: I0129 17:04:11.117116 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7eb0acf-dfc4-4c24-8231-bfae5b620653" containerName="console" Jan 29 17:04:11 crc kubenswrapper[4886]: E0129 17:04:11.117128 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6508ccc6-d71f-449d-bbe1-83270d005815" containerName="dnsmasq-dns" Jan 29 17:04:11 crc kubenswrapper[4886]: I0129 17:04:11.117136 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="6508ccc6-d71f-449d-bbe1-83270d005815" containerName="dnsmasq-dns" Jan 29 17:04:11 crc kubenswrapper[4886]: I0129 17:04:11.117402 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7eb0acf-dfc4-4c24-8231-bfae5b620653" containerName="console" Jan 29 17:04:11 crc kubenswrapper[4886]: I0129 17:04:11.117430 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="6508ccc6-d71f-449d-bbe1-83270d005815" containerName="dnsmasq-dns" Jan 29 17:04:11 crc kubenswrapper[4886]: I0129 17:04:11.118774 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-t8rs7" Jan 29 17:04:11 crc kubenswrapper[4886]: I0129 17:04:11.136408 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-t8rs7"] Jan 29 17:04:11 crc kubenswrapper[4886]: I0129 17:04:11.191609 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czcfr\" (UniqueName: \"kubernetes.io/projected/eb212bbc-3071-4fda-968d-b6d3f19996ee-kube-api-access-czcfr\") pod \"dnsmasq-dns-698758b865-t8rs7\" (UID: \"eb212bbc-3071-4fda-968d-b6d3f19996ee\") " pod="openstack/dnsmasq-dns-698758b865-t8rs7" Jan 29 17:04:11 crc kubenswrapper[4886]: I0129 17:04:11.191654 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb212bbc-3071-4fda-968d-b6d3f19996ee-config\") pod \"dnsmasq-dns-698758b865-t8rs7\" (UID: \"eb212bbc-3071-4fda-968d-b6d3f19996ee\") " pod="openstack/dnsmasq-dns-698758b865-t8rs7" Jan 29 17:04:11 crc kubenswrapper[4886]: I0129 17:04:11.191752 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb212bbc-3071-4fda-968d-b6d3f19996ee-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-t8rs7\" (UID: \"eb212bbc-3071-4fda-968d-b6d3f19996ee\") " pod="openstack/dnsmasq-dns-698758b865-t8rs7" Jan 29 17:04:11 crc kubenswrapper[4886]: I0129 17:04:11.191777 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb212bbc-3071-4fda-968d-b6d3f19996ee-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-t8rs7\" (UID: \"eb212bbc-3071-4fda-968d-b6d3f19996ee\") " pod="openstack/dnsmasq-dns-698758b865-t8rs7" Jan 29 17:04:11 crc kubenswrapper[4886]: I0129 17:04:11.191814 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb212bbc-3071-4fda-968d-b6d3f19996ee-dns-svc\") pod \"dnsmasq-dns-698758b865-t8rs7\" (UID: \"eb212bbc-3071-4fda-968d-b6d3f19996ee\") " pod="openstack/dnsmasq-dns-698758b865-t8rs7" Jan 29 17:04:11 crc kubenswrapper[4886]: I0129 17:04:11.295795 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/eb212bbc-3071-4fda-968d-b6d3f19996ee-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-t8rs7\" (UID: \"eb212bbc-3071-4fda-968d-b6d3f19996ee\") " pod="openstack/dnsmasq-dns-698758b865-t8rs7" Jan 29 17:04:11 crc kubenswrapper[4886]: I0129 17:04:11.295860 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb212bbc-3071-4fda-968d-b6d3f19996ee-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-t8rs7\" (UID: \"eb212bbc-3071-4fda-968d-b6d3f19996ee\") " pod="openstack/dnsmasq-dns-698758b865-t8rs7" Jan 29 17:04:11 crc kubenswrapper[4886]: I0129 17:04:11.295904 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb212bbc-3071-4fda-968d-b6d3f19996ee-dns-svc\") pod \"dnsmasq-dns-698758b865-t8rs7\" (UID: \"eb212bbc-3071-4fda-968d-b6d3f19996ee\") " pod="openstack/dnsmasq-dns-698758b865-t8rs7" Jan 29 17:04:11 crc kubenswrapper[4886]: I0129 17:04:11.295967 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czcfr\" (UniqueName: \"kubernetes.io/projected/eb212bbc-3071-4fda-968d-b6d3f19996ee-kube-api-access-czcfr\") pod \"dnsmasq-dns-698758b865-t8rs7\" (UID: \"eb212bbc-3071-4fda-968d-b6d3f19996ee\") " pod="openstack/dnsmasq-dns-698758b865-t8rs7" Jan 29 17:04:11 crc kubenswrapper[4886]: I0129 17:04:11.295987 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb212bbc-3071-4fda-968d-b6d3f19996ee-config\") pod \"dnsmasq-dns-698758b865-t8rs7\" (UID: \"eb212bbc-3071-4fda-968d-b6d3f19996ee\") " pod="openstack/dnsmasq-dns-698758b865-t8rs7" Jan 29 17:04:11 crc kubenswrapper[4886]: I0129 17:04:11.296772 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb212bbc-3071-4fda-968d-b6d3f19996ee-config\") pod \"dnsmasq-dns-698758b865-t8rs7\" (UID: \"eb212bbc-3071-4fda-968d-b6d3f19996ee\") " pod="openstack/dnsmasq-dns-698758b865-t8rs7" Jan 29 17:04:11 crc kubenswrapper[4886]: I0129 17:04:11.297450 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb212bbc-3071-4fda-968d-b6d3f19996ee-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-t8rs7\" (UID: \"eb212bbc-3071-4fda-968d-b6d3f19996ee\") " pod="openstack/dnsmasq-dns-698758b865-t8rs7" Jan 29 17:04:11 crc kubenswrapper[4886]: I0129 17:04:11.297924 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb212bbc-3071-4fda-968d-b6d3f19996ee-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-t8rs7\" (UID: \"eb212bbc-3071-4fda-968d-b6d3f19996ee\") " pod="openstack/dnsmasq-dns-698758b865-t8rs7" Jan 29 17:04:11 crc kubenswrapper[4886]: I0129 17:04:11.298752 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb212bbc-3071-4fda-968d-b6d3f19996ee-dns-svc\") pod \"dnsmasq-dns-698758b865-t8rs7\" (UID: \"eb212bbc-3071-4fda-968d-b6d3f19996ee\") " pod="openstack/dnsmasq-dns-698758b865-t8rs7" Jan 29 17:04:11 crc kubenswrapper[4886]: I0129 17:04:11.352396 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czcfr\" (UniqueName: \"kubernetes.io/projected/eb212bbc-3071-4fda-968d-b6d3f19996ee-kube-api-access-czcfr\") pod \"dnsmasq-dns-698758b865-t8rs7\" (UID: 
\"eb212bbc-3071-4fda-968d-b6d3f19996ee\") " pod="openstack/dnsmasq-dns-698758b865-t8rs7" Jan 29 17:04:11 crc kubenswrapper[4886]: I0129 17:04:11.505976 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-t8rs7" Jan 29 17:04:11 crc kubenswrapper[4886]: I0129 17:04:11.825011 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 29 17:04:11 crc kubenswrapper[4886]: I0129 17:04:11.885225 4886 generic.go:334] "Generic (PLEG): container finished" podID="3748c627-3deb-4b89-acd3-2269f42ba343" containerID="85f248c363891313b6dfd3563ffece575be09f0a7b8fb96dd58a65634816d1bc" exitCode=0 Jan 29 17:04:11 crc kubenswrapper[4886]: I0129 17:04:11.885335 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-tn5pt" event={"ID":"3748c627-3deb-4b89-acd3-2269f42ba343","Type":"ContainerDied","Data":"85f248c363891313b6dfd3563ffece575be09f0a7b8fb96dd58a65634816d1bc"} Jan 29 17:04:11 crc kubenswrapper[4886]: I0129 17:04:11.948674 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.245497 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.252177 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.255514 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.255541 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.255700 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-l9zkf" Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.258692 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.270977 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.423415 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-88051746-028d-43a7-b95b-e788ae0f16c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-88051746-028d-43a7-b95b-e788ae0f16c4\") pod \"swift-storage-0\" (UID: \"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47\") " pod="openstack/swift-storage-0" Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.423491 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pwc7\" (UniqueName: \"kubernetes.io/projected/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-kube-api-access-5pwc7\") pod \"swift-storage-0\" (UID: \"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47\") " pod="openstack/swift-storage-0" Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.423547 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47\") " pod="openstack/swift-storage-0" Jan 29 17:04:12 
crc kubenswrapper[4886]: I0129 17:04:12.423625 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-cache\") pod \"swift-storage-0\" (UID: \"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47\") " pod="openstack/swift-storage-0" Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.423670 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-etc-swift\") pod \"swift-storage-0\" (UID: \"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47\") " pod="openstack/swift-storage-0" Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.423701 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-lock\") pod \"swift-storage-0\" (UID: \"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47\") " pod="openstack/swift-storage-0" Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.525218 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-cache\") pod \"swift-storage-0\" (UID: \"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47\") " pod="openstack/swift-storage-0" Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.525315 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-etc-swift\") pod \"swift-storage-0\" (UID: \"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47\") " pod="openstack/swift-storage-0" Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.525383 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-lock\") pod \"swift-storage-0\" (UID: \"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47\") " pod="openstack/swift-storage-0" Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.525451 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-88051746-028d-43a7-b95b-e788ae0f16c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-88051746-028d-43a7-b95b-e788ae0f16c4\") pod \"swift-storage-0\" (UID: \"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47\") " pod="openstack/swift-storage-0" Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.525487 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pwc7\" (UniqueName: \"kubernetes.io/projected/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-kube-api-access-5pwc7\") pod \"swift-storage-0\" (UID: \"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47\") " pod="openstack/swift-storage-0" Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.525536 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47\") " pod="openstack/swift-storage-0" Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.525687 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-cache\") pod \"swift-storage-0\" (UID: 
\"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47\") " pod="openstack/swift-storage-0" Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.525984 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-lock\") pod \"swift-storage-0\" (UID: \"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47\") " pod="openstack/swift-storage-0" Jan 29 17:04:12 crc kubenswrapper[4886]: E0129 17:04:12.526110 4886 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 17:04:12 crc kubenswrapper[4886]: E0129 17:04:12.526133 4886 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 17:04:12 crc kubenswrapper[4886]: E0129 17:04:12.526172 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-etc-swift podName:6e2f2c6c-bc32-4a32-ba2c-8954d277ce47 nodeName:}" failed. No retries permitted until 2026-01-29 17:04:13.026156575 +0000 UTC m=+2535.934875847 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-etc-swift") pod "swift-storage-0" (UID: "6e2f2c6c-bc32-4a32-ba2c-8954d277ce47") : configmap "swift-ring-files" not found Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.528456 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.528488 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-88051746-028d-43a7-b95b-e788ae0f16c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-88051746-028d-43a7-b95b-e788ae0f16c4\") pod \"swift-storage-0\" (UID: \"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/426a48f8948db7cb55561ec1b18122536ab9cc087c8ed2a6c2cec3e8d4976eec/globalmount\"" pod="openstack/swift-storage-0" Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.534989 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47\") " pod="openstack/swift-storage-0" Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.544435 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pwc7\" (UniqueName: \"kubernetes.io/projected/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-kube-api-access-5pwc7\") pod \"swift-storage-0\" (UID: \"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47\") " pod="openstack/swift-storage-0" Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.602128 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-88051746-028d-43a7-b95b-e788ae0f16c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-88051746-028d-43a7-b95b-e788ae0f16c4\") pod \"swift-storage-0\" (UID: \"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47\") " pod="openstack/swift-storage-0" Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.902748 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-r28c8"] Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.913464 4886 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-r28c8" Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.923000 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.923295 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.923728 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.960066 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dba0c99a-0f14-42bd-8822-ee79fc73ee41","Type":"ContainerStarted","Data":"27931458465a13e72788f87cbc8b654d38049cab2e1e500e5508e4b6b86f09b2"} Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.967586 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-tn5pt" Jan 29 17:04:12 crc kubenswrapper[4886]: I0129 17:04:12.990172 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-r28c8"] Jan 29 17:04:12 crc kubenswrapper[4886]: E0129 17:04:12.993714 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-x6r5m ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-r28c8" podUID="60ecf496-dd57-4ed4-9bbc-2e40f9df4447" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.024260 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-s7294"] Jan 29 17:04:13 crc kubenswrapper[4886]: E0129 17:04:13.024790 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3748c627-3deb-4b89-acd3-2269f42ba343" containerName="init" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.024811 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="3748c627-3deb-4b89-acd3-2269f42ba343" containerName="init" Jan 29 17:04:13 crc kubenswrapper[4886]: E0129 17:04:13.024832 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3748c627-3deb-4b89-acd3-2269f42ba343" containerName="dnsmasq-dns" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.024838 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="3748c627-3deb-4b89-acd3-2269f42ba343" containerName="dnsmasq-dns" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.025005 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="3748c627-3deb-4b89-acd3-2269f42ba343" containerName="dnsmasq-dns" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.025714 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-s7294" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.051021 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-s7294"] Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.061368 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6zcd\" (UniqueName: \"kubernetes.io/projected/3748c627-3deb-4b89-acd3-2269f42ba343-kube-api-access-x6zcd\") pod \"3748c627-3deb-4b89-acd3-2269f42ba343\" (UID: \"3748c627-3deb-4b89-acd3-2269f42ba343\") " Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.061426 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3748c627-3deb-4b89-acd3-2269f42ba343-dns-svc\") pod \"3748c627-3deb-4b89-acd3-2269f42ba343\" (UID: \"3748c627-3deb-4b89-acd3-2269f42ba343\") " Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.061750 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3748c627-3deb-4b89-acd3-2269f42ba343-config\") pod \"3748c627-3deb-4b89-acd3-2269f42ba343\" (UID: \"3748c627-3deb-4b89-acd3-2269f42ba343\") " Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.062512 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-etc-swift\") pod \"swift-storage-0\" (UID: \"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47\") " pod="openstack/swift-storage-0" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.062548 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-ring-data-devices\") pod \"swift-ring-rebalance-r28c8\" (UID: \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\") " pod="openstack/swift-ring-rebalance-r28c8" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.062642 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-scripts\") pod \"swift-ring-rebalance-r28c8\" (UID: \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\") " pod="openstack/swift-ring-rebalance-r28c8" Jan 29 17:04:13 crc kubenswrapper[4886]: E0129 17:04:13.062808 4886 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 17:04:13 crc kubenswrapper[4886]: E0129 17:04:13.062829 4886 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 17:04:13 crc kubenswrapper[4886]: E0129 17:04:13.062892 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-etc-swift podName:6e2f2c6c-bc32-4a32-ba2c-8954d277ce47 nodeName:}" failed. No retries permitted until 2026-01-29 17:04:14.062865877 +0000 UTC m=+2536.971585149 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-etc-swift") pod "swift-storage-0" (UID: "6e2f2c6c-bc32-4a32-ba2c-8954d277ce47") : configmap "swift-ring-files" not found Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.062943 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-dispersionconf\") pod \"swift-ring-rebalance-r28c8\" (UID: \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\") " pod="openstack/swift-ring-rebalance-r28c8" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.063098 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-swiftconf\") pod \"swift-ring-rebalance-r28c8\" (UID: \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\") " pod="openstack/swift-ring-rebalance-r28c8" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.063176 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-etc-swift\") pod \"swift-ring-rebalance-r28c8\" (UID: \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\") " pod="openstack/swift-ring-rebalance-r28c8" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.063280 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6r5m\" (UniqueName: \"kubernetes.io/projected/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-kube-api-access-x6r5m\") pod \"swift-ring-rebalance-r28c8\" (UID: \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\") " pod="openstack/swift-ring-rebalance-r28c8" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.063318 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-combined-ca-bundle\") pod \"swift-ring-rebalance-r28c8\" (UID: \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\") " pod="openstack/swift-ring-rebalance-r28c8" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.069800 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3748c627-3deb-4b89-acd3-2269f42ba343-kube-api-access-x6zcd" (OuterVolumeSpecName: "kube-api-access-x6zcd") pod "3748c627-3deb-4b89-acd3-2269f42ba343" (UID: "3748c627-3deb-4b89-acd3-2269f42ba343"). InnerVolumeSpecName "kube-api-access-x6zcd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.078911 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-r28c8"] Jan 29 17:04:13 crc kubenswrapper[4886]: W0129 17:04:13.100354 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb212bbc_3071_4fda_968d_b6d3f19996ee.slice/crio-da2d61dccf59424cc14b54a614d36ae066f9a9d76b8f120a8702b08ed1b7f949 WatchSource:0}: Error finding container da2d61dccf59424cc14b54a614d36ae066f9a9d76b8f120a8702b08ed1b7f949: Status 404 returned error can't find the container with id da2d61dccf59424cc14b54a614d36ae066f9a9d76b8f120a8702b08ed1b7f949 Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.122304 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-t8rs7"] Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.164905 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3748c627-3deb-4b89-acd3-2269f42ba343-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3748c627-3deb-4b89-acd3-2269f42ba343" (UID: "3748c627-3deb-4b89-acd3-2269f42ba343"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.168961 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3748c627-3deb-4b89-acd3-2269f42ba343-config" (OuterVolumeSpecName: "config") pod "3748c627-3deb-4b89-acd3-2269f42ba343" (UID: "3748c627-3deb-4b89-acd3-2269f42ba343"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.173554 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-swiftconf\") pod \"swift-ring-rebalance-r28c8\" (UID: \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\") " pod="openstack/swift-ring-rebalance-r28c8" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.173632 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-etc-swift\") pod \"swift-ring-rebalance-r28c8\" (UID: \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\") " pod="openstack/swift-ring-rebalance-r28c8" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.173665 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6r5m\" (UniqueName: \"kubernetes.io/projected/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-kube-api-access-x6r5m\") pod \"swift-ring-rebalance-r28c8\" (UID: \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\") " pod="openstack/swift-ring-rebalance-r28c8" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.173699 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ebccb3a0-d421-4c30-9201-43e9106e4006-dispersionconf\") pod \"swift-ring-rebalance-s7294\" (UID: \"ebccb3a0-d421-4c30-9201-43e9106e4006\") " pod="openstack/swift-ring-rebalance-s7294" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.173717 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-combined-ca-bundle\") pod \"swift-ring-rebalance-r28c8\" (UID: \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\") " pod="openstack/swift-ring-rebalance-r28c8" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.173740 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebccb3a0-d421-4c30-9201-43e9106e4006-scripts\") pod \"swift-ring-rebalance-s7294\" (UID: \"ebccb3a0-d421-4c30-9201-43e9106e4006\") " pod="openstack/swift-ring-rebalance-s7294" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.173773 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-ring-data-devices\") pod \"swift-ring-rebalance-r28c8\" (UID: \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\") " pod="openstack/swift-ring-rebalance-r28c8" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.173790 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-scripts\") pod \"swift-ring-rebalance-r28c8\" (UID: \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\") " pod="openstack/swift-ring-rebalance-r28c8" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.173897 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km9gr\" (UniqueName: \"kubernetes.io/projected/ebccb3a0-d421-4c30-9201-43e9106e4006-kube-api-access-km9gr\") pod \"swift-ring-rebalance-s7294\" (UID: \"ebccb3a0-d421-4c30-9201-43e9106e4006\") " pod="openstack/swift-ring-rebalance-s7294" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.173952 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebccb3a0-d421-4c30-9201-43e9106e4006-combined-ca-bundle\") pod \"swift-ring-rebalance-s7294\" (UID: \"ebccb3a0-d421-4c30-9201-43e9106e4006\") " pod="openstack/swift-ring-rebalance-s7294" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.174013 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-dispersionconf\") pod \"swift-ring-rebalance-r28c8\" (UID: \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\") " pod="openstack/swift-ring-rebalance-r28c8" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.174051 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ebccb3a0-d421-4c30-9201-43e9106e4006-etc-swift\") pod \"swift-ring-rebalance-s7294\" (UID: \"ebccb3a0-d421-4c30-9201-43e9106e4006\") " pod="openstack/swift-ring-rebalance-s7294" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.174153 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ebccb3a0-d421-4c30-9201-43e9106e4006-ring-data-devices\") pod \"swift-ring-rebalance-s7294\" (UID: \"ebccb3a0-d421-4c30-9201-43e9106e4006\") " pod="openstack/swift-ring-rebalance-s7294" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.174189 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" 
(UniqueName: \"kubernetes.io/secret/ebccb3a0-d421-4c30-9201-43e9106e4006-swiftconf\") pod \"swift-ring-rebalance-s7294\" (UID: \"ebccb3a0-d421-4c30-9201-43e9106e4006\") " pod="openstack/swift-ring-rebalance-s7294" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.174273 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3748c627-3deb-4b89-acd3-2269f42ba343-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.174296 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6zcd\" (UniqueName: \"kubernetes.io/projected/3748c627-3deb-4b89-acd3-2269f42ba343-kube-api-access-x6zcd\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.174312 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3748c627-3deb-4b89-acd3-2269f42ba343-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.174709 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-scripts\") pod \"swift-ring-rebalance-r28c8\" (UID: \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\") " pod="openstack/swift-ring-rebalance-r28c8" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.174962 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-etc-swift\") pod \"swift-ring-rebalance-r28c8\" (UID: \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\") " pod="openstack/swift-ring-rebalance-r28c8" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.175120 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-ring-data-devices\") pod \"swift-ring-rebalance-r28c8\" (UID: \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\") " pod="openstack/swift-ring-rebalance-r28c8" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.177707 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-swiftconf\") pod \"swift-ring-rebalance-r28c8\" (UID: \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\") " pod="openstack/swift-ring-rebalance-r28c8" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.178686 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-dispersionconf\") pod \"swift-ring-rebalance-r28c8\" (UID: \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\") " pod="openstack/swift-ring-rebalance-r28c8" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.184990 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-combined-ca-bundle\") pod \"swift-ring-rebalance-r28c8\" (UID: \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\") " pod="openstack/swift-ring-rebalance-r28c8" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.194073 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6r5m\" (UniqueName: \"kubernetes.io/projected/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-kube-api-access-x6r5m\") pod \"swift-ring-rebalance-r28c8\" (UID: 
\"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\") " pod="openstack/swift-ring-rebalance-r28c8" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.275899 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ebccb3a0-d421-4c30-9201-43e9106e4006-dispersionconf\") pod \"swift-ring-rebalance-s7294\" (UID: \"ebccb3a0-d421-4c30-9201-43e9106e4006\") " pod="openstack/swift-ring-rebalance-s7294" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.275953 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebccb3a0-d421-4c30-9201-43e9106e4006-scripts\") pod \"swift-ring-rebalance-s7294\" (UID: \"ebccb3a0-d421-4c30-9201-43e9106e4006\") " pod="openstack/swift-ring-rebalance-s7294" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.276024 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km9gr\" (UniqueName: \"kubernetes.io/projected/ebccb3a0-d421-4c30-9201-43e9106e4006-kube-api-access-km9gr\") pod \"swift-ring-rebalance-s7294\" (UID: \"ebccb3a0-d421-4c30-9201-43e9106e4006\") " pod="openstack/swift-ring-rebalance-s7294" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.276050 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebccb3a0-d421-4c30-9201-43e9106e4006-combined-ca-bundle\") pod \"swift-ring-rebalance-s7294\" (UID: \"ebccb3a0-d421-4c30-9201-43e9106e4006\") " pod="openstack/swift-ring-rebalance-s7294" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.276091 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ebccb3a0-d421-4c30-9201-43e9106e4006-etc-swift\") pod \"swift-ring-rebalance-s7294\" (UID: \"ebccb3a0-d421-4c30-9201-43e9106e4006\") " pod="openstack/swift-ring-rebalance-s7294" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.276142 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ebccb3a0-d421-4c30-9201-43e9106e4006-ring-data-devices\") pod \"swift-ring-rebalance-s7294\" (UID: \"ebccb3a0-d421-4c30-9201-43e9106e4006\") " pod="openstack/swift-ring-rebalance-s7294" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.276161 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ebccb3a0-d421-4c30-9201-43e9106e4006-swiftconf\") pod \"swift-ring-rebalance-s7294\" (UID: \"ebccb3a0-d421-4c30-9201-43e9106e4006\") " pod="openstack/swift-ring-rebalance-s7294" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.276872 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ebccb3a0-d421-4c30-9201-43e9106e4006-etc-swift\") pod \"swift-ring-rebalance-s7294\" (UID: \"ebccb3a0-d421-4c30-9201-43e9106e4006\") " pod="openstack/swift-ring-rebalance-s7294" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.277026 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebccb3a0-d421-4c30-9201-43e9106e4006-scripts\") pod \"swift-ring-rebalance-s7294\" (UID: \"ebccb3a0-d421-4c30-9201-43e9106e4006\") " pod="openstack/swift-ring-rebalance-s7294" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 
17:04:13.277388 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ebccb3a0-d421-4c30-9201-43e9106e4006-ring-data-devices\") pod \"swift-ring-rebalance-s7294\" (UID: \"ebccb3a0-d421-4c30-9201-43e9106e4006\") " pod="openstack/swift-ring-rebalance-s7294" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.279736 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ebccb3a0-d421-4c30-9201-43e9106e4006-swiftconf\") pod \"swift-ring-rebalance-s7294\" (UID: \"ebccb3a0-d421-4c30-9201-43e9106e4006\") " pod="openstack/swift-ring-rebalance-s7294" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.282710 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ebccb3a0-d421-4c30-9201-43e9106e4006-dispersionconf\") pod \"swift-ring-rebalance-s7294\" (UID: \"ebccb3a0-d421-4c30-9201-43e9106e4006\") " pod="openstack/swift-ring-rebalance-s7294" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.285485 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebccb3a0-d421-4c30-9201-43e9106e4006-combined-ca-bundle\") pod \"swift-ring-rebalance-s7294\" (UID: \"ebccb3a0-d421-4c30-9201-43e9106e4006\") " pod="openstack/swift-ring-rebalance-s7294" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.291653 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km9gr\" (UniqueName: \"kubernetes.io/projected/ebccb3a0-d421-4c30-9201-43e9106e4006-kube-api-access-km9gr\") pod \"swift-ring-rebalance-s7294\" (UID: \"ebccb3a0-d421-4c30-9201-43e9106e4006\") " pod="openstack/swift-ring-rebalance-s7294" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.341471 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-s7294" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.619639 4886 scope.go:117] "RemoveContainer" containerID="1ef597c576c05004c5148470ade7ddd51ab3cad8d942f918ff09afb054559dfc" Jan 29 17:04:13 crc kubenswrapper[4886]: E0129 17:04:13.620202 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.815086 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-s7294"] Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.978934 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-tn5pt" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.979039 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-tn5pt" event={"ID":"3748c627-3deb-4b89-acd3-2269f42ba343","Type":"ContainerDied","Data":"5ab6a774b30c4926836ad5d20a9d8ca3a61ba5556b7b5bbd72dc9a90a6ac1502"} Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.979644 4886 scope.go:117] "RemoveContainer" containerID="85f248c363891313b6dfd3563ffece575be09f0a7b8fb96dd58a65634816d1bc" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.982638 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-s7294" event={"ID":"ebccb3a0-d421-4c30-9201-43e9106e4006","Type":"ContainerStarted","Data":"b1f9445ba0ed2622eaf729acf0f6efe1278fbfe9cc96bab1babb0686d7460824"} Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.989052 4886 generic.go:334] "Generic (PLEG): container finished" podID="eb212bbc-3071-4fda-968d-b6d3f19996ee" containerID="71b921e8db9e8e747c69aeafc44470b62e0400a32e8c7e760d1d991c175cbc64" exitCode=0 Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.989143 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-r28c8" Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.989414 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-t8rs7" event={"ID":"eb212bbc-3071-4fda-968d-b6d3f19996ee","Type":"ContainerDied","Data":"71b921e8db9e8e747c69aeafc44470b62e0400a32e8c7e760d1d991c175cbc64"} Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.989480 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-t8rs7" event={"ID":"eb212bbc-3071-4fda-968d-b6d3f19996ee","Type":"ContainerStarted","Data":"da2d61dccf59424cc14b54a614d36ae066f9a9d76b8f120a8702b08ed1b7f949"} Jan 29 17:04:13 crc kubenswrapper[4886]: I0129 17:04:13.990547 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 29 17:04:14 crc kubenswrapper[4886]: I0129 17:04:14.017859 4886 scope.go:117] "RemoveContainer" containerID="fcac16ce7b565761d87666d9cf26f0b7bab43d40d9fedf5938d903160f00e164" Jan 29 17:04:14 crc kubenswrapper[4886]: I0129 17:04:14.041056 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=65.8879652 podStartE2EDuration="1m44.041035547s" podCreationTimestamp="2026-01-29 17:02:30 +0000 UTC" firstStartedPulling="2026-01-29 17:03:34.369983163 +0000 UTC m=+2497.278702435" lastFinishedPulling="2026-01-29 17:04:12.52305351 +0000 UTC m=+2535.431772782" observedRunningTime="2026-01-29 17:04:14.037239153 +0000 UTC m=+2536.945958415" watchObservedRunningTime="2026-01-29 17:04:14.041035547 +0000 UTC m=+2536.949754809" Jan 29 17:04:14 crc kubenswrapper[4886]: I0129 17:04:14.095173 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-etc-swift\") pod \"swift-storage-0\" (UID: \"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47\") " pod="openstack/swift-storage-0" Jan 29 17:04:14 crc kubenswrapper[4886]: E0129 17:04:14.095369 4886 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 17:04:14 crc kubenswrapper[4886]: E0129 17:04:14.095387 4886 projected.go:194] Error preparing data for 
projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 17:04:14 crc kubenswrapper[4886]: E0129 17:04:14.095442 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-etc-swift podName:6e2f2c6c-bc32-4a32-ba2c-8954d277ce47 nodeName:}" failed. No retries permitted until 2026-01-29 17:04:16.095424015 +0000 UTC m=+2539.004143287 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-etc-swift") pod "swift-storage-0" (UID: "6e2f2c6c-bc32-4a32-ba2c-8954d277ce47") : configmap "swift-ring-files" not found Jan 29 17:04:14 crc kubenswrapper[4886]: I0129 17:04:14.110188 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-r28c8" Jan 29 17:04:14 crc kubenswrapper[4886]: I0129 17:04:14.178869 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-tn5pt"] Jan 29 17:04:14 crc kubenswrapper[4886]: I0129 17:04:14.186891 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-tn5pt"] Jan 29 17:04:14 crc kubenswrapper[4886]: I0129 17:04:14.196998 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-combined-ca-bundle\") pod \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\" (UID: \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\") " Jan 29 17:04:14 crc kubenswrapper[4886]: I0129 17:04:14.197113 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-swiftconf\") pod \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\" (UID: \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\") " Jan 29 17:04:14 crc kubenswrapper[4886]: I0129 17:04:14.197154 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6r5m\" (UniqueName: \"kubernetes.io/projected/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-kube-api-access-x6r5m\") pod \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\" (UID: \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\") " Jan 29 17:04:14 crc kubenswrapper[4886]: I0129 17:04:14.197190 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-scripts\") pod \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\" (UID: \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\") " Jan 29 17:04:14 crc kubenswrapper[4886]: I0129 17:04:14.197218 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-ring-data-devices\") pod \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\" (UID: \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\") " Jan 29 17:04:14 crc kubenswrapper[4886]: I0129 17:04:14.197357 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-dispersionconf\") pod \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\" (UID: \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\") " Jan 29 17:04:14 crc kubenswrapper[4886]: I0129 17:04:14.197412 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-etc-swift\") pod \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\" (UID: \"60ecf496-dd57-4ed4-9bbc-2e40f9df4447\") " Jan 29 17:04:14 crc kubenswrapper[4886]: I0129 17:04:14.197739 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-scripts" (OuterVolumeSpecName: "scripts") pod "60ecf496-dd57-4ed4-9bbc-2e40f9df4447" (UID: "60ecf496-dd57-4ed4-9bbc-2e40f9df4447"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:04:14 crc kubenswrapper[4886]: I0129 17:04:14.197935 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "60ecf496-dd57-4ed4-9bbc-2e40f9df4447" (UID: "60ecf496-dd57-4ed4-9bbc-2e40f9df4447"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:04:14 crc kubenswrapper[4886]: I0129 17:04:14.197944 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "60ecf496-dd57-4ed4-9bbc-2e40f9df4447" (UID: "60ecf496-dd57-4ed4-9bbc-2e40f9df4447"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:04:14 crc kubenswrapper[4886]: I0129 17:04:14.198363 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:14 crc kubenswrapper[4886]: I0129 17:04:14.198380 4886 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:14 crc kubenswrapper[4886]: I0129 17:04:14.198392 4886 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:14 crc kubenswrapper[4886]: I0129 17:04:14.202898 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60ecf496-dd57-4ed4-9bbc-2e40f9df4447" (UID: "60ecf496-dd57-4ed4-9bbc-2e40f9df4447"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:04:14 crc kubenswrapper[4886]: I0129 17:04:14.203039 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-kube-api-access-x6r5m" (OuterVolumeSpecName: "kube-api-access-x6r5m") pod "60ecf496-dd57-4ed4-9bbc-2e40f9df4447" (UID: "60ecf496-dd57-4ed4-9bbc-2e40f9df4447"). InnerVolumeSpecName "kube-api-access-x6r5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:04:14 crc kubenswrapper[4886]: I0129 17:04:14.203017 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "60ecf496-dd57-4ed4-9bbc-2e40f9df4447" (UID: "60ecf496-dd57-4ed4-9bbc-2e40f9df4447"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:04:14 crc kubenswrapper[4886]: I0129 17:04:14.203156 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "60ecf496-dd57-4ed4-9bbc-2e40f9df4447" (UID: "60ecf496-dd57-4ed4-9bbc-2e40f9df4447"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:04:14 crc kubenswrapper[4886]: I0129 17:04:14.300659 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:14 crc kubenswrapper[4886]: I0129 17:04:14.300695 4886 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:14 crc kubenswrapper[4886]: I0129 17:04:14.300709 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6r5m\" (UniqueName: \"kubernetes.io/projected/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-kube-api-access-x6r5m\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:14 crc kubenswrapper[4886]: I0129 17:04:14.300722 4886 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/60ecf496-dd57-4ed4-9bbc-2e40f9df4447-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:14 crc kubenswrapper[4886]: I0129 17:04:14.634492 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3748c627-3deb-4b89-acd3-2269f42ba343" path="/var/lib/kubelet/pods/3748c627-3deb-4b89-acd3-2269f42ba343/volumes" Jan 29 17:04:15 crc kubenswrapper[4886]: I0129 17:04:15.000477 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-r28c8" Jan 29 17:04:15 crc kubenswrapper[4886]: I0129 17:04:15.047166 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-r28c8"] Jan 29 17:04:15 crc kubenswrapper[4886]: I0129 17:04:15.077477 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-r28c8"] Jan 29 17:04:16 crc kubenswrapper[4886]: I0129 17:04:16.143556 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-etc-swift\") pod \"swift-storage-0\" (UID: \"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47\") " pod="openstack/swift-storage-0" Jan 29 17:04:16 crc kubenswrapper[4886]: E0129 17:04:16.143749 4886 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 17:04:16 crc kubenswrapper[4886]: E0129 17:04:16.143998 4886 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 17:04:16 crc kubenswrapper[4886]: E0129 17:04:16.144050 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-etc-swift podName:6e2f2c6c-bc32-4a32-ba2c-8954d277ce47 nodeName:}" failed. No retries permitted until 2026-01-29 17:04:20.144032799 +0000 UTC m=+2543.052752071 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-etc-swift") pod "swift-storage-0" (UID: "6e2f2c6c-bc32-4a32-ba2c-8954d277ce47") : configmap "swift-ring-files" not found Jan 29 17:04:16 crc kubenswrapper[4886]: I0129 17:04:16.638816 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60ecf496-dd57-4ed4-9bbc-2e40f9df4447" path="/var/lib/kubelet/pods/60ecf496-dd57-4ed4-9bbc-2e40f9df4447/volumes" Jan 29 17:04:17 crc kubenswrapper[4886]: I0129 17:04:17.017805 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-t8rs7" event={"ID":"eb212bbc-3071-4fda-968d-b6d3f19996ee","Type":"ContainerStarted","Data":"54bdeb43a338f0b719b206ca212f50bc02c6d2592ec0ac66c6b8743631a3cf1b"} Jan 29 17:04:17 crc kubenswrapper[4886]: I0129 17:04:17.043790 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-v692m"] Jan 29 17:04:17 crc kubenswrapper[4886]: I0129 17:04:17.045027 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-v692m" Jan 29 17:04:17 crc kubenswrapper[4886]: I0129 17:04:17.048051 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 29 17:04:17 crc kubenswrapper[4886]: I0129 17:04:17.060317 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-v692m"] Jan 29 17:04:17 crc kubenswrapper[4886]: I0129 17:04:17.168080 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a29ba47-9a94-492f-8abd-c01b04d0b3c1-operator-scripts\") pod \"root-account-create-update-v692m\" (UID: \"7a29ba47-9a94-492f-8abd-c01b04d0b3c1\") " pod="openstack/root-account-create-update-v692m" Jan 29 17:04:17 crc kubenswrapper[4886]: I0129 17:04:17.168148 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjrkn\" (UniqueName: \"kubernetes.io/projected/7a29ba47-9a94-492f-8abd-c01b04d0b3c1-kube-api-access-gjrkn\") pod \"root-account-create-update-v692m\" (UID: \"7a29ba47-9a94-492f-8abd-c01b04d0b3c1\") " pod="openstack/root-account-create-update-v692m" Jan 29 17:04:17 crc kubenswrapper[4886]: I0129 17:04:17.270517 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a29ba47-9a94-492f-8abd-c01b04d0b3c1-operator-scripts\") pod \"root-account-create-update-v692m\" (UID: \"7a29ba47-9a94-492f-8abd-c01b04d0b3c1\") " pod="openstack/root-account-create-update-v692m" Jan 29 17:04:17 crc kubenswrapper[4886]: I0129 17:04:17.270577 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjrkn\" (UniqueName: \"kubernetes.io/projected/7a29ba47-9a94-492f-8abd-c01b04d0b3c1-kube-api-access-gjrkn\") pod \"root-account-create-update-v692m\" (UID: \"7a29ba47-9a94-492f-8abd-c01b04d0b3c1\") " pod="openstack/root-account-create-update-v692m" Jan 29 17:04:17 crc kubenswrapper[4886]: I0129 17:04:17.271710 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a29ba47-9a94-492f-8abd-c01b04d0b3c1-operator-scripts\") pod \"root-account-create-update-v692m\" (UID: \"7a29ba47-9a94-492f-8abd-c01b04d0b3c1\") " 
pod="openstack/root-account-create-update-v692m" Jan 29 17:04:17 crc kubenswrapper[4886]: I0129 17:04:17.292098 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjrkn\" (UniqueName: \"kubernetes.io/projected/7a29ba47-9a94-492f-8abd-c01b04d0b3c1-kube-api-access-gjrkn\") pod \"root-account-create-update-v692m\" (UID: \"7a29ba47-9a94-492f-8abd-c01b04d0b3c1\") " pod="openstack/root-account-create-update-v692m" Jan 29 17:04:17 crc kubenswrapper[4886]: I0129 17:04:17.363222 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-v692m" Jan 29 17:04:17 crc kubenswrapper[4886]: I0129 17:04:17.863835 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-v692m"] Jan 29 17:04:17 crc kubenswrapper[4886]: W0129 17:04:17.865113 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a29ba47_9a94_492f_8abd_c01b04d0b3c1.slice/crio-64c66fbc90bf20316435457059ddb5ea811599c8b622e4c863e62edddb2ed230 WatchSource:0}: Error finding container 64c66fbc90bf20316435457059ddb5ea811599c8b622e4c863e62edddb2ed230: Status 404 returned error can't find the container with id 64c66fbc90bf20316435457059ddb5ea811599c8b622e4c863e62edddb2ed230 Jan 29 17:04:18 crc kubenswrapper[4886]: I0129 17:04:18.030969 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-v692m" event={"ID":"7a29ba47-9a94-492f-8abd-c01b04d0b3c1","Type":"ContainerStarted","Data":"64c66fbc90bf20316435457059ddb5ea811599c8b622e4c863e62edddb2ed230"} Jan 29 17:04:18 crc kubenswrapper[4886]: I0129 17:04:18.031116 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-t8rs7" Jan 29 17:04:18 crc kubenswrapper[4886]: I0129 17:04:18.067078 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-t8rs7" podStartSLOduration=7.067056323 podStartE2EDuration="7.067056323s" podCreationTimestamp="2026-01-29 17:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:04:18.061213162 +0000 UTC m=+2540.969932434" watchObservedRunningTime="2026-01-29 17:04:18.067056323 +0000 UTC m=+2540.975775595" Jan 29 17:04:20 crc kubenswrapper[4886]: I0129 17:04:20.148940 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-etc-swift\") pod \"swift-storage-0\" (UID: \"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47\") " pod="openstack/swift-storage-0" Jan 29 17:04:20 crc kubenswrapper[4886]: E0129 17:04:20.149187 4886 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 17:04:20 crc kubenswrapper[4886]: E0129 17:04:20.149633 4886 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 17:04:20 crc kubenswrapper[4886]: E0129 17:04:20.149694 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-etc-swift podName:6e2f2c6c-bc32-4a32-ba2c-8954d277ce47 nodeName:}" failed. No retries permitted until 2026-01-29 17:04:28.149674113 +0000 UTC m=+2551.058393385 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-etc-swift") pod "swift-storage-0" (UID: "6e2f2c6c-bc32-4a32-ba2c-8954d277ce47") : configmap "swift-ring-files" not found Jan 29 17:04:20 crc kubenswrapper[4886]: I0129 17:04:20.656439 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 29 17:04:21 crc kubenswrapper[4886]: I0129 17:04:21.057045 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-v692m" event={"ID":"7a29ba47-9a94-492f-8abd-c01b04d0b3c1","Type":"ContainerStarted","Data":"8d073617833fd03b3552145f85acbb902d34a0687d97b69de74b719dca519779"} Jan 29 17:04:21 crc kubenswrapper[4886]: I0129 17:04:21.078595 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-v692m" podStartSLOduration=4.078577416 podStartE2EDuration="4.078577416s" podCreationTimestamp="2026-01-29 17:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:04:21.076733716 +0000 UTC m=+2543.985452998" watchObservedRunningTime="2026-01-29 17:04:21.078577416 +0000 UTC m=+2543.987296688" Jan 29 17:04:21 crc kubenswrapper[4886]: I0129 17:04:21.107464 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 29 17:04:21 crc kubenswrapper[4886]: I0129 17:04:21.507976 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-t8rs7" Jan 29 17:04:21 crc kubenswrapper[4886]: I0129 17:04:21.598608 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-29gw9"] Jan 29 17:04:21 crc kubenswrapper[4886]: I0129 17:04:21.598853 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-29gw9" podUID="4ef7b166-c078-4530-b05b-ae3e44088122" containerName="dnsmasq-dns" containerID="cri-o://e0d2fbb581e1f1576641f1d25760b3a9a9b2fc1c9e7db710f6875c72957b1c0b" gracePeriod=10 Jan 29 17:04:23 crc kubenswrapper[4886]: I0129 17:04:23.076838 4886 generic.go:334] "Generic (PLEG): container finished" podID="4ef7b166-c078-4530-b05b-ae3e44088122" containerID="e0d2fbb581e1f1576641f1d25760b3a9a9b2fc1c9e7db710f6875c72957b1c0b" exitCode=0 Jan 29 17:04:23 crc kubenswrapper[4886]: I0129 17:04:23.076925 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-29gw9" event={"ID":"4ef7b166-c078-4530-b05b-ae3e44088122","Type":"ContainerDied","Data":"e0d2fbb581e1f1576641f1d25760b3a9a9b2fc1c9e7db710f6875c72957b1c0b"} Jan 29 17:04:23 crc kubenswrapper[4886]: I0129 17:04:23.822171 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 29 17:04:23 crc kubenswrapper[4886]: I0129 17:04:23.916274 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 29 17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.043405 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-b7d9p" podUID="544b4515-481c-47f1-acb6-ed332a3497d4" containerName="ovn-controller" probeResult="failure" output=< Jan 29 17:04:24 crc kubenswrapper[4886]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 29 17:04:24 crc kubenswrapper[4886]: > Jan 29 
17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.092043 4886 generic.go:334] "Generic (PLEG): container finished" podID="9d0db9ae-746b-419a-bc61-bf85645d2bff" containerID="90c62e1af999c12bd3cee48206c3c037d5e41331e61dd2c2d6e99f50a71acbba" exitCode=0 Jan 29 17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.092102 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9d0db9ae-746b-419a-bc61-bf85645d2bff","Type":"ContainerDied","Data":"90c62e1af999c12bd3cee48206c3c037d5e41331e61dd2c2d6e99f50a71acbba"} Jan 29 17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.096019 4886 generic.go:334] "Generic (PLEG): container finished" podID="842bfe4d-04ba-4143-9076-3033163c7b82" containerID="5c98fb62cf57fb19a685fed0c362721e82c04b5d528f5ad7579c1412f1f79e81" exitCode=0 Jan 29 17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.096089 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"842bfe4d-04ba-4143-9076-3033163c7b82","Type":"ContainerDied","Data":"5c98fb62cf57fb19a685fed0c362721e82c04b5d528f5ad7579c1412f1f79e81"} Jan 29 17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.108587 4886 generic.go:334] "Generic (PLEG): container finished" podID="49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10" containerID="e164b2712bb12971248661528d0d661417a2f6869697cd179a3843bd4e2721f1" exitCode=0 Jan 29 17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.108661 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10","Type":"ContainerDied","Data":"e164b2712bb12971248661528d0d661417a2f6869697cd179a3843bd4e2721f1"} Jan 29 17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.114197 4886 generic.go:334] "Generic (PLEG): container finished" podID="2b0be43b-8956-45aa-ad50-de9183b3fea3" containerID="121b418980e461ff82cc0059422b3aec6e494e5fd4c123ffbab962202999757c" exitCode=0 Jan 29 17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.115142 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2b0be43b-8956-45aa-ad50-de9183b3fea3","Type":"ContainerDied","Data":"121b418980e461ff82cc0059422b3aec6e494e5fd4c123ffbab962202999757c"} Jan 29 17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.161463 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-f0b5-account-create-update-8b8vz"] Jan 29 17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.163644 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-f0b5-account-create-update-8b8vz" Jan 29 17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.165705 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 29 17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.171942 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-xhds2" Jan 29 17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.180479 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f0b5-account-create-update-8b8vz"] Jan 29 17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.245980 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29921ec8-f68f-4547-a2c0-d4d3f5de6960-operator-scripts\") pod \"glance-f0b5-account-create-update-8b8vz\" (UID: \"29921ec8-f68f-4547-a2c0-d4d3f5de6960\") " pod="openstack/glance-f0b5-account-create-update-8b8vz" Jan 29 17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.246355 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxbwc\" (UniqueName: \"kubernetes.io/projected/29921ec8-f68f-4547-a2c0-d4d3f5de6960-kube-api-access-pxbwc\") pod \"glance-f0b5-account-create-update-8b8vz\" (UID: \"29921ec8-f68f-4547-a2c0-d4d3f5de6960\") " pod="openstack/glance-f0b5-account-create-update-8b8vz" Jan 29 17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.352530 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29921ec8-f68f-4547-a2c0-d4d3f5de6960-operator-scripts\") pod \"glance-f0b5-account-create-update-8b8vz\" (UID: \"29921ec8-f68f-4547-a2c0-d4d3f5de6960\") " pod="openstack/glance-f0b5-account-create-update-8b8vz" Jan 29 17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.352868 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxbwc\" (UniqueName: \"kubernetes.io/projected/29921ec8-f68f-4547-a2c0-d4d3f5de6960-kube-api-access-pxbwc\") pod \"glance-f0b5-account-create-update-8b8vz\" (UID: \"29921ec8-f68f-4547-a2c0-d4d3f5de6960\") " pod="openstack/glance-f0b5-account-create-update-8b8vz" Jan 29 17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.353440 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29921ec8-f68f-4547-a2c0-d4d3f5de6960-operator-scripts\") pod \"glance-f0b5-account-create-update-8b8vz\" (UID: \"29921ec8-f68f-4547-a2c0-d4d3f5de6960\") " pod="openstack/glance-f0b5-account-create-update-8b8vz" Jan 29 17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.408014 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxbwc\" (UniqueName: \"kubernetes.io/projected/29921ec8-f68f-4547-a2c0-d4d3f5de6960-kube-api-access-pxbwc\") pod \"glance-f0b5-account-create-update-8b8vz\" (UID: \"29921ec8-f68f-4547-a2c0-d4d3f5de6960\") " pod="openstack/glance-f0b5-account-create-update-8b8vz" Jan 29 17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.503828 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-f0b5-account-create-update-8b8vz" Jan 29 17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.513484 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-mdvpb"] Jan 29 17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.515534 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mdvpb" Jan 29 17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.538550 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mdvpb"] Jan 29 17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.560083 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe-operator-scripts\") pod \"glance-db-create-mdvpb\" (UID: \"9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe\") " pod="openstack/glance-db-create-mdvpb" Jan 29 17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.560118 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2mjv\" (UniqueName: \"kubernetes.io/projected/9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe-kube-api-access-s2mjv\") pod \"glance-db-create-mdvpb\" (UID: \"9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe\") " pod="openstack/glance-db-create-mdvpb" Jan 29 17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.661658 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe-operator-scripts\") pod \"glance-db-create-mdvpb\" (UID: \"9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe\") " pod="openstack/glance-db-create-mdvpb" Jan 29 17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.661712 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2mjv\" (UniqueName: \"kubernetes.io/projected/9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe-kube-api-access-s2mjv\") pod \"glance-db-create-mdvpb\" (UID: \"9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe\") " pod="openstack/glance-db-create-mdvpb" Jan 29 17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.662566 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe-operator-scripts\") pod \"glance-db-create-mdvpb\" (UID: \"9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe\") " pod="openstack/glance-db-create-mdvpb" Jan 29 17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.678483 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2mjv\" (UniqueName: \"kubernetes.io/projected/9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe-kube-api-access-s2mjv\") pod \"glance-db-create-mdvpb\" (UID: \"9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe\") " pod="openstack/glance-db-create-mdvpb" Jan 29 17:04:24 crc kubenswrapper[4886]: I0129 17:04:24.848821 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-mdvpb" Jan 29 17:04:25 crc kubenswrapper[4886]: I0129 17:04:25.135887 4886 generic.go:334] "Generic (PLEG): container finished" podID="7a29ba47-9a94-492f-8abd-c01b04d0b3c1" containerID="8d073617833fd03b3552145f85acbb902d34a0687d97b69de74b719dca519779" exitCode=0 Jan 29 17:04:25 crc kubenswrapper[4886]: I0129 17:04:25.135934 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-v692m" event={"ID":"7a29ba47-9a94-492f-8abd-c01b04d0b3c1","Type":"ContainerDied","Data":"8d073617833fd03b3552145f85acbb902d34a0687d97b69de74b719dca519779"} Jan 29 17:04:26 crc kubenswrapper[4886]: E0129 17:04:26.398178 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741" Jan 29 17:04:26 crc kubenswrapper[4886]: E0129 17:04:26.398991 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus,Image:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741,Command:[],Args:[--config.file=/etc/prometheus/config_out/prometheus.env.yaml --web.enable-lifecycle --web.route-prefix=/ --storage.tsdb.retention.time=24h --storage.tsdb.path=/prometheus --web.config.file=/etc/prometheus/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9090,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/prometheus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-db,ReadOnly:false,MountPath:/prometheus,SubPath:prometheus-db,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/prometheus/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w2cnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/healthy,Port:{1 0 
web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:15,SuccessThreshold:1,FailureThreshold:60,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(ce7955a1-eb58-425a-872a-7ec102b8e090): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 17:04:26 crc kubenswrapper[4886]: I0129 17:04:26.531878 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-29gw9" Jan 29 17:04:26 crc kubenswrapper[4886]: I0129 17:04:26.601063 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ef7b166-c078-4530-b05b-ae3e44088122-ovsdbserver-nb\") pod \"4ef7b166-c078-4530-b05b-ae3e44088122\" (UID: \"4ef7b166-c078-4530-b05b-ae3e44088122\") " Jan 29 17:04:26 crc kubenswrapper[4886]: I0129 17:04:26.601137 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ef7b166-c078-4530-b05b-ae3e44088122-ovsdbserver-sb\") pod \"4ef7b166-c078-4530-b05b-ae3e44088122\" (UID: \"4ef7b166-c078-4530-b05b-ae3e44088122\") " Jan 29 17:04:26 crc kubenswrapper[4886]: I0129 17:04:26.601188 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ef7b166-c078-4530-b05b-ae3e44088122-dns-svc\") pod \"4ef7b166-c078-4530-b05b-ae3e44088122\" (UID: \"4ef7b166-c078-4530-b05b-ae3e44088122\") " Jan 29 17:04:26 crc kubenswrapper[4886]: I0129 17:04:26.601244 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ef7b166-c078-4530-b05b-ae3e44088122-config\") pod \"4ef7b166-c078-4530-b05b-ae3e44088122\" (UID: \"4ef7b166-c078-4530-b05b-ae3e44088122\") " Jan 29 17:04:26 crc kubenswrapper[4886]: I0129 17:04:26.601359 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5gl6\" (UniqueName: \"kubernetes.io/projected/4ef7b166-c078-4530-b05b-ae3e44088122-kube-api-access-h5gl6\") pod \"4ef7b166-c078-4530-b05b-ae3e44088122\" (UID: 
\"4ef7b166-c078-4530-b05b-ae3e44088122\") " Jan 29 17:04:26 crc kubenswrapper[4886]: I0129 17:04:26.607186 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ef7b166-c078-4530-b05b-ae3e44088122-kube-api-access-h5gl6" (OuterVolumeSpecName: "kube-api-access-h5gl6") pod "4ef7b166-c078-4530-b05b-ae3e44088122" (UID: "4ef7b166-c078-4530-b05b-ae3e44088122"). InnerVolumeSpecName "kube-api-access-h5gl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:04:26 crc kubenswrapper[4886]: I0129 17:04:26.654268 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ef7b166-c078-4530-b05b-ae3e44088122-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4ef7b166-c078-4530-b05b-ae3e44088122" (UID: "4ef7b166-c078-4530-b05b-ae3e44088122"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:04:26 crc kubenswrapper[4886]: I0129 17:04:26.662284 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ef7b166-c078-4530-b05b-ae3e44088122-config" (OuterVolumeSpecName: "config") pod "4ef7b166-c078-4530-b05b-ae3e44088122" (UID: "4ef7b166-c078-4530-b05b-ae3e44088122"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:04:26 crc kubenswrapper[4886]: I0129 17:04:26.677263 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ef7b166-c078-4530-b05b-ae3e44088122-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4ef7b166-c078-4530-b05b-ae3e44088122" (UID: "4ef7b166-c078-4530-b05b-ae3e44088122"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:04:26 crc kubenswrapper[4886]: I0129 17:04:26.697614 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ef7b166-c078-4530-b05b-ae3e44088122-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4ef7b166-c078-4530-b05b-ae3e44088122" (UID: "4ef7b166-c078-4530-b05b-ae3e44088122"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:04:26 crc kubenswrapper[4886]: I0129 17:04:26.703682 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ef7b166-c078-4530-b05b-ae3e44088122-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:26 crc kubenswrapper[4886]: I0129 17:04:26.703718 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ef7b166-c078-4530-b05b-ae3e44088122-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:26 crc kubenswrapper[4886]: I0129 17:04:26.703734 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ef7b166-c078-4530-b05b-ae3e44088122-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:26 crc kubenswrapper[4886]: I0129 17:04:26.703747 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ef7b166-c078-4530-b05b-ae3e44088122-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:26 crc kubenswrapper[4886]: I0129 17:04:26.703759 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5gl6\" (UniqueName: \"kubernetes.io/projected/4ef7b166-c078-4530-b05b-ae3e44088122-kube-api-access-h5gl6\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:27 crc kubenswrapper[4886]: I0129 17:04:27.154574 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-29gw9" event={"ID":"4ef7b166-c078-4530-b05b-ae3e44088122","Type":"ContainerDied","Data":"b30007dc7ac0cb559fa26a9b1b3904c3d91b03c66e5d4e617cb72bf920854daa"} Jan 29 17:04:27 crc kubenswrapper[4886]: I0129 17:04:27.154998 4886 scope.go:117] "RemoveContainer" containerID="e0d2fbb581e1f1576641f1d25760b3a9a9b2fc1c9e7db710f6875c72957b1c0b" Jan 29 17:04:27 crc kubenswrapper[4886]: I0129 17:04:27.154667 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-29gw9" Jan 29 17:04:27 crc kubenswrapper[4886]: I0129 17:04:27.189272 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-29gw9"] Jan 29 17:04:27 crc kubenswrapper[4886]: I0129 17:04:27.197576 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-29gw9"] Jan 29 17:04:27 crc kubenswrapper[4886]: I0129 17:04:27.644744 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-v692m" Jan 29 17:04:27 crc kubenswrapper[4886]: I0129 17:04:27.723855 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a29ba47-9a94-492f-8abd-c01b04d0b3c1-operator-scripts\") pod \"7a29ba47-9a94-492f-8abd-c01b04d0b3c1\" (UID: \"7a29ba47-9a94-492f-8abd-c01b04d0b3c1\") " Jan 29 17:04:27 crc kubenswrapper[4886]: I0129 17:04:27.724016 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjrkn\" (UniqueName: \"kubernetes.io/projected/7a29ba47-9a94-492f-8abd-c01b04d0b3c1-kube-api-access-gjrkn\") pod \"7a29ba47-9a94-492f-8abd-c01b04d0b3c1\" (UID: \"7a29ba47-9a94-492f-8abd-c01b04d0b3c1\") " Jan 29 17:04:27 crc kubenswrapper[4886]: I0129 17:04:27.725083 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a29ba47-9a94-492f-8abd-c01b04d0b3c1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7a29ba47-9a94-492f-8abd-c01b04d0b3c1" (UID: "7a29ba47-9a94-492f-8abd-c01b04d0b3c1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:04:27 crc kubenswrapper[4886]: I0129 17:04:27.725716 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a29ba47-9a94-492f-8abd-c01b04d0b3c1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:27 crc kubenswrapper[4886]: I0129 17:04:27.744674 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a29ba47-9a94-492f-8abd-c01b04d0b3c1-kube-api-access-gjrkn" (OuterVolumeSpecName: "kube-api-access-gjrkn") pod "7a29ba47-9a94-492f-8abd-c01b04d0b3c1" (UID: "7a29ba47-9a94-492f-8abd-c01b04d0b3c1"). InnerVolumeSpecName "kube-api-access-gjrkn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:04:27 crc kubenswrapper[4886]: I0129 17:04:27.828759 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjrkn\" (UniqueName: \"kubernetes.io/projected/7a29ba47-9a94-492f-8abd-c01b04d0b3c1-kube-api-access-gjrkn\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.054225 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-sgspp"] Jan 29 17:04:28 crc kubenswrapper[4886]: E0129 17:04:28.054865 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef7b166-c078-4530-b05b-ae3e44088122" containerName="init" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.054883 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef7b166-c078-4530-b05b-ae3e44088122" containerName="init" Jan 29 17:04:28 crc kubenswrapper[4886]: E0129 17:04:28.054896 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a29ba47-9a94-492f-8abd-c01b04d0b3c1" containerName="mariadb-account-create-update" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.054904 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a29ba47-9a94-492f-8abd-c01b04d0b3c1" containerName="mariadb-account-create-update" Jan 29 17:04:28 crc kubenswrapper[4886]: E0129 17:04:28.054922 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef7b166-c078-4530-b05b-ae3e44088122" containerName="dnsmasq-dns" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.054930 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef7b166-c078-4530-b05b-ae3e44088122" containerName="dnsmasq-dns" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.055192 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a29ba47-9a94-492f-8abd-c01b04d0b3c1" containerName="mariadb-account-create-update" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.055213 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ef7b166-c078-4530-b05b-ae3e44088122" containerName="dnsmasq-dns" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.056193 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-sgspp" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.063074 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-sgspp"] Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.140607 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hc79\" (UniqueName: \"kubernetes.io/projected/b696cd6b-840b-4505-9010-114d223a90e9-kube-api-access-8hc79\") pod \"keystone-db-create-sgspp\" (UID: \"b696cd6b-840b-4505-9010-114d223a90e9\") " pod="openstack/keystone-db-create-sgspp" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.140898 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b696cd6b-840b-4505-9010-114d223a90e9-operator-scripts\") pod \"keystone-db-create-sgspp\" (UID: \"b696cd6b-840b-4505-9010-114d223a90e9\") " pod="openstack/keystone-db-create-sgspp" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.170385 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-v692m" event={"ID":"7a29ba47-9a94-492f-8abd-c01b04d0b3c1","Type":"ContainerDied","Data":"64c66fbc90bf20316435457059ddb5ea811599c8b622e4c863e62edddb2ed230"} Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.170455 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64c66fbc90bf20316435457059ddb5ea811599c8b622e4c863e62edddb2ed230" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.170553 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-v692m" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.181553 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-00e3-account-create-update-5hhsj"] Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.183180 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-00e3-account-create-update-5hhsj" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.185878 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.205987 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-00e3-account-create-update-5hhsj"] Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.242870 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b696cd6b-840b-4505-9010-114d223a90e9-operator-scripts\") pod \"keystone-db-create-sgspp\" (UID: \"b696cd6b-840b-4505-9010-114d223a90e9\") " pod="openstack/keystone-db-create-sgspp" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.242951 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hc79\" (UniqueName: \"kubernetes.io/projected/b696cd6b-840b-4505-9010-114d223a90e9-kube-api-access-8hc79\") pod \"keystone-db-create-sgspp\" (UID: \"b696cd6b-840b-4505-9010-114d223a90e9\") " pod="openstack/keystone-db-create-sgspp" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.243010 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m47b2\" (UniqueName: \"kubernetes.io/projected/aa302a57-5c6b-41b1-ac4b-7d9095b7b65a-kube-api-access-m47b2\") pod \"keystone-00e3-account-create-update-5hhsj\" (UID: \"aa302a57-5c6b-41b1-ac4b-7d9095b7b65a\") " pod="openstack/keystone-00e3-account-create-update-5hhsj" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.243069 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-etc-swift\") pod \"swift-storage-0\" (UID: \"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47\") " pod="openstack/swift-storage-0" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.243120 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa302a57-5c6b-41b1-ac4b-7d9095b7b65a-operator-scripts\") pod \"keystone-00e3-account-create-update-5hhsj\" (UID: \"aa302a57-5c6b-41b1-ac4b-7d9095b7b65a\") " pod="openstack/keystone-00e3-account-create-update-5hhsj" Jan 29 17:04:28 crc kubenswrapper[4886]: E0129 17:04:28.243504 4886 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 17:04:28 crc kubenswrapper[4886]: E0129 17:04:28.243541 4886 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 17:04:28 crc kubenswrapper[4886]: E0129 17:04:28.243599 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-etc-swift podName:6e2f2c6c-bc32-4a32-ba2c-8954d277ce47 nodeName:}" failed. No retries permitted until 2026-01-29 17:04:44.243578396 +0000 UTC m=+2567.152297688 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-etc-swift") pod "swift-storage-0" (UID: "6e2f2c6c-bc32-4a32-ba2c-8954d277ce47") : configmap "swift-ring-files" not found Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.244453 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b696cd6b-840b-4505-9010-114d223a90e9-operator-scripts\") pod \"keystone-db-create-sgspp\" (UID: \"b696cd6b-840b-4505-9010-114d223a90e9\") " pod="openstack/keystone-db-create-sgspp" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.289382 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hc79\" (UniqueName: \"kubernetes.io/projected/b696cd6b-840b-4505-9010-114d223a90e9-kube-api-access-8hc79\") pod \"keystone-db-create-sgspp\" (UID: \"b696cd6b-840b-4505-9010-114d223a90e9\") " pod="openstack/keystone-db-create-sgspp" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.345437 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m47b2\" (UniqueName: \"kubernetes.io/projected/aa302a57-5c6b-41b1-ac4b-7d9095b7b65a-kube-api-access-m47b2\") pod \"keystone-00e3-account-create-update-5hhsj\" (UID: \"aa302a57-5c6b-41b1-ac4b-7d9095b7b65a\") " pod="openstack/keystone-00e3-account-create-update-5hhsj" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.345535 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa302a57-5c6b-41b1-ac4b-7d9095b7b65a-operator-scripts\") pod \"keystone-00e3-account-create-update-5hhsj\" (UID: \"aa302a57-5c6b-41b1-ac4b-7d9095b7b65a\") " pod="openstack/keystone-00e3-account-create-update-5hhsj" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.346236 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa302a57-5c6b-41b1-ac4b-7d9095b7b65a-operator-scripts\") pod \"keystone-00e3-account-create-update-5hhsj\" (UID: \"aa302a57-5c6b-41b1-ac4b-7d9095b7b65a\") " pod="openstack/keystone-00e3-account-create-update-5hhsj" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.386768 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-4vq4n"] Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.388434 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-4vq4n" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.404983 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m47b2\" (UniqueName: \"kubernetes.io/projected/aa302a57-5c6b-41b1-ac4b-7d9095b7b65a-kube-api-access-m47b2\") pod \"keystone-00e3-account-create-update-5hhsj\" (UID: \"aa302a57-5c6b-41b1-ac4b-7d9095b7b65a\") " pod="openstack/keystone-00e3-account-create-update-5hhsj" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.410663 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-4vq4n"] Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.445195 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-sgspp" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.446596 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d-operator-scripts\") pod \"placement-db-create-4vq4n\" (UID: \"6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d\") " pod="openstack/placement-db-create-4vq4n" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.446790 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8547\" (UniqueName: \"kubernetes.io/projected/6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d-kube-api-access-n8547\") pod \"placement-db-create-4vq4n\" (UID: \"6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d\") " pod="openstack/placement-db-create-4vq4n" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.505501 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d860-account-create-update-5kd66"] Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.506881 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d860-account-create-update-5kd66" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.508190 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-00e3-account-create-update-5hhsj" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.509684 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.514590 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d860-account-create-update-5kd66"] Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.548756 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzhjq\" (UniqueName: \"kubernetes.io/projected/66c16915-30cc-4a4f-81ff-4b82cf152968-kube-api-access-lzhjq\") pod \"placement-d860-account-create-update-5kd66\" (UID: \"66c16915-30cc-4a4f-81ff-4b82cf152968\") " pod="openstack/placement-d860-account-create-update-5kd66" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.548862 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8547\" (UniqueName: \"kubernetes.io/projected/6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d-kube-api-access-n8547\") pod \"placement-db-create-4vq4n\" (UID: \"6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d\") " pod="openstack/placement-db-create-4vq4n" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.549094 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d-operator-scripts\") pod \"placement-db-create-4vq4n\" (UID: \"6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d\") " pod="openstack/placement-db-create-4vq4n" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.549133 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66c16915-30cc-4a4f-81ff-4b82cf152968-operator-scripts\") pod \"placement-d860-account-create-update-5kd66\" (UID: \"66c16915-30cc-4a4f-81ff-4b82cf152968\") " pod="openstack/placement-d860-account-create-update-5kd66" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.549926 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d-operator-scripts\") pod \"placement-db-create-4vq4n\" (UID: \"6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d\") " pod="openstack/placement-db-create-4vq4n" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.572982 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8547\" (UniqueName: \"kubernetes.io/projected/6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d-kube-api-access-n8547\") pod \"placement-db-create-4vq4n\" (UID: \"6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d\") " pod="openstack/placement-db-create-4vq4n" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.621128 4886 scope.go:117] "RemoveContainer" containerID="1ef597c576c05004c5148470ade7ddd51ab3cad8d942f918ff09afb054559dfc" Jan 29 17:04:28 crc kubenswrapper[4886]: E0129 17:04:28.621474 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.627960 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ef7b166-c078-4530-b05b-ae3e44088122" path="/var/lib/kubelet/pods/4ef7b166-c078-4530-b05b-ae3e44088122/volumes" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.651125 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzhjq\" (UniqueName: \"kubernetes.io/projected/66c16915-30cc-4a4f-81ff-4b82cf152968-kube-api-access-lzhjq\") pod \"placement-d860-account-create-update-5kd66\" (UID: \"66c16915-30cc-4a4f-81ff-4b82cf152968\") " pod="openstack/placement-d860-account-create-update-5kd66" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.651446 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66c16915-30cc-4a4f-81ff-4b82cf152968-operator-scripts\") pod \"placement-d860-account-create-update-5kd66\" (UID: \"66c16915-30cc-4a4f-81ff-4b82cf152968\") " pod="openstack/placement-d860-account-create-update-5kd66" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.652776 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66c16915-30cc-4a4f-81ff-4b82cf152968-operator-scripts\") pod \"placement-d860-account-create-update-5kd66\" (UID: \"66c16915-30cc-4a4f-81ff-4b82cf152968\") " pod="openstack/placement-d860-account-create-update-5kd66" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.669053 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzhjq\" (UniqueName: \"kubernetes.io/projected/66c16915-30cc-4a4f-81ff-4b82cf152968-kube-api-access-lzhjq\") pod \"placement-d860-account-create-update-5kd66\" (UID: \"66c16915-30cc-4a4f-81ff-4b82cf152968\") " pod="openstack/placement-d860-account-create-update-5kd66" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.753246 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-4vq4n" Jan 29 17:04:28 crc kubenswrapper[4886]: I0129 17:04:28.832677 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d860-account-create-update-5kd66" Jan 29 17:04:29 crc kubenswrapper[4886]: I0129 17:04:29.045106 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-b7d9p" podUID="544b4515-481c-47f1-acb6-ed332a3497d4" containerName="ovn-controller" probeResult="failure" output=< Jan 29 17:04:29 crc kubenswrapper[4886]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 29 17:04:29 crc kubenswrapper[4886]: > Jan 29 17:04:29 crc kubenswrapper[4886]: I0129 17:04:29.150116 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-xhds2" Jan 29 17:04:29 crc kubenswrapper[4886]: I0129 17:04:29.385129 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-b7d9p-config-fbd7w"] Jan 29 17:04:29 crc kubenswrapper[4886]: I0129 17:04:29.387230 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-b7d9p-config-fbd7w" Jan 29 17:04:29 crc kubenswrapper[4886]: I0129 17:04:29.391217 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 29 17:04:29 crc kubenswrapper[4886]: I0129 17:04:29.402781 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-b7d9p-config-fbd7w"] Jan 29 17:04:29 crc kubenswrapper[4886]: I0129 17:04:29.466694 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e489f203-c94a-4bbb-b22a-750bec963d77-additional-scripts\") pod \"ovn-controller-b7d9p-config-fbd7w\" (UID: \"e489f203-c94a-4bbb-b22a-750bec963d77\") " pod="openstack/ovn-controller-b7d9p-config-fbd7w" Jan 29 17:04:29 crc kubenswrapper[4886]: I0129 17:04:29.466741 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e489f203-c94a-4bbb-b22a-750bec963d77-var-run-ovn\") pod \"ovn-controller-b7d9p-config-fbd7w\" (UID: \"e489f203-c94a-4bbb-b22a-750bec963d77\") " pod="openstack/ovn-controller-b7d9p-config-fbd7w" Jan 29 17:04:29 crc kubenswrapper[4886]: I0129 17:04:29.466785 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e489f203-c94a-4bbb-b22a-750bec963d77-var-log-ovn\") pod \"ovn-controller-b7d9p-config-fbd7w\" (UID: \"e489f203-c94a-4bbb-b22a-750bec963d77\") " pod="openstack/ovn-controller-b7d9p-config-fbd7w" Jan 29 17:04:29 crc kubenswrapper[4886]: I0129 17:04:29.466902 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7nwl\" (UniqueName: \"kubernetes.io/projected/e489f203-c94a-4bbb-b22a-750bec963d77-kube-api-access-j7nwl\") pod \"ovn-controller-b7d9p-config-fbd7w\" (UID: \"e489f203-c94a-4bbb-b22a-750bec963d77\") " pod="openstack/ovn-controller-b7d9p-config-fbd7w" Jan 29 17:04:29 crc kubenswrapper[4886]: I0129 17:04:29.466969 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e489f203-c94a-4bbb-b22a-750bec963d77-scripts\") pod 
\"ovn-controller-b7d9p-config-fbd7w\" (UID: \"e489f203-c94a-4bbb-b22a-750bec963d77\") " pod="openstack/ovn-controller-b7d9p-config-fbd7w" Jan 29 17:04:29 crc kubenswrapper[4886]: I0129 17:04:29.467098 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e489f203-c94a-4bbb-b22a-750bec963d77-var-run\") pod \"ovn-controller-b7d9p-config-fbd7w\" (UID: \"e489f203-c94a-4bbb-b22a-750bec963d77\") " pod="openstack/ovn-controller-b7d9p-config-fbd7w" Jan 29 17:04:29 crc kubenswrapper[4886]: I0129 17:04:29.577383 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e489f203-c94a-4bbb-b22a-750bec963d77-var-log-ovn\") pod \"ovn-controller-b7d9p-config-fbd7w\" (UID: \"e489f203-c94a-4bbb-b22a-750bec963d77\") " pod="openstack/ovn-controller-b7d9p-config-fbd7w" Jan 29 17:04:29 crc kubenswrapper[4886]: I0129 17:04:29.577480 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7nwl\" (UniqueName: \"kubernetes.io/projected/e489f203-c94a-4bbb-b22a-750bec963d77-kube-api-access-j7nwl\") pod \"ovn-controller-b7d9p-config-fbd7w\" (UID: \"e489f203-c94a-4bbb-b22a-750bec963d77\") " pod="openstack/ovn-controller-b7d9p-config-fbd7w" Jan 29 17:04:29 crc kubenswrapper[4886]: I0129 17:04:29.577981 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e489f203-c94a-4bbb-b22a-750bec963d77-var-log-ovn\") pod \"ovn-controller-b7d9p-config-fbd7w\" (UID: \"e489f203-c94a-4bbb-b22a-750bec963d77\") " pod="openstack/ovn-controller-b7d9p-config-fbd7w" Jan 29 17:04:29 crc kubenswrapper[4886]: I0129 17:04:29.578597 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e489f203-c94a-4bbb-b22a-750bec963d77-scripts\") pod \"ovn-controller-b7d9p-config-fbd7w\" (UID: \"e489f203-c94a-4bbb-b22a-750bec963d77\") " pod="openstack/ovn-controller-b7d9p-config-fbd7w" Jan 29 17:04:29 crc kubenswrapper[4886]: I0129 17:04:29.579168 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e489f203-c94a-4bbb-b22a-750bec963d77-var-run\") pod \"ovn-controller-b7d9p-config-fbd7w\" (UID: \"e489f203-c94a-4bbb-b22a-750bec963d77\") " pod="openstack/ovn-controller-b7d9p-config-fbd7w" Jan 29 17:04:29 crc kubenswrapper[4886]: I0129 17:04:29.579352 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e489f203-c94a-4bbb-b22a-750bec963d77-additional-scripts\") pod \"ovn-controller-b7d9p-config-fbd7w\" (UID: \"e489f203-c94a-4bbb-b22a-750bec963d77\") " pod="openstack/ovn-controller-b7d9p-config-fbd7w" Jan 29 17:04:29 crc kubenswrapper[4886]: I0129 17:04:29.579391 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e489f203-c94a-4bbb-b22a-750bec963d77-var-run-ovn\") pod \"ovn-controller-b7d9p-config-fbd7w\" (UID: \"e489f203-c94a-4bbb-b22a-750bec963d77\") " pod="openstack/ovn-controller-b7d9p-config-fbd7w" Jan 29 17:04:29 crc kubenswrapper[4886]: I0129 17:04:29.579346 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e489f203-c94a-4bbb-b22a-750bec963d77-var-run\") pod 
\"ovn-controller-b7d9p-config-fbd7w\" (UID: \"e489f203-c94a-4bbb-b22a-750bec963d77\") " pod="openstack/ovn-controller-b7d9p-config-fbd7w" Jan 29 17:04:29 crc kubenswrapper[4886]: I0129 17:04:29.579739 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e489f203-c94a-4bbb-b22a-750bec963d77-var-run-ovn\") pod \"ovn-controller-b7d9p-config-fbd7w\" (UID: \"e489f203-c94a-4bbb-b22a-750bec963d77\") " pod="openstack/ovn-controller-b7d9p-config-fbd7w" Jan 29 17:04:29 crc kubenswrapper[4886]: I0129 17:04:29.580211 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e489f203-c94a-4bbb-b22a-750bec963d77-additional-scripts\") pod \"ovn-controller-b7d9p-config-fbd7w\" (UID: \"e489f203-c94a-4bbb-b22a-750bec963d77\") " pod="openstack/ovn-controller-b7d9p-config-fbd7w" Jan 29 17:04:29 crc kubenswrapper[4886]: I0129 17:04:29.582179 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e489f203-c94a-4bbb-b22a-750bec963d77-scripts\") pod \"ovn-controller-b7d9p-config-fbd7w\" (UID: \"e489f203-c94a-4bbb-b22a-750bec963d77\") " pod="openstack/ovn-controller-b7d9p-config-fbd7w" Jan 29 17:04:29 crc kubenswrapper[4886]: I0129 17:04:29.602670 4886 scope.go:117] "RemoveContainer" containerID="cbbe07486135ddfe120920c1f4f9ccadece896cbebac702a4fee9f0d2022f4db" Jan 29 17:04:29 crc kubenswrapper[4886]: I0129 17:04:29.609228 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7nwl\" (UniqueName: \"kubernetes.io/projected/e489f203-c94a-4bbb-b22a-750bec963d77-kube-api-access-j7nwl\") pod \"ovn-controller-b7d9p-config-fbd7w\" (UID: \"e489f203-c94a-4bbb-b22a-750bec963d77\") " pod="openstack/ovn-controller-b7d9p-config-fbd7w" Jan 29 17:04:29 crc kubenswrapper[4886]: I0129 17:04:29.718112 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-b7d9p-config-fbd7w" Jan 29 17:04:30 crc kubenswrapper[4886]: I0129 17:04:30.278688 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f0b5-account-create-update-8b8vz"] Jan 29 17:04:30 crc kubenswrapper[4886]: W0129 17:04:30.282199 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29921ec8_f68f_4547_a2c0_d4d3f5de6960.slice/crio-e3585e24c6e310ab66cc3acdb8b7196a729aef835b23a64db0aa1d39659b162c WatchSource:0}: Error finding container e3585e24c6e310ab66cc3acdb8b7196a729aef835b23a64db0aa1d39659b162c: Status 404 returned error can't find the container with id e3585e24c6e310ab66cc3acdb8b7196a729aef835b23a64db0aa1d39659b162c Jan 29 17:04:30 crc kubenswrapper[4886]: I0129 17:04:30.375809 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-v692m"] Jan 29 17:04:30 crc kubenswrapper[4886]: I0129 17:04:30.396714 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-v692m"] Jan 29 17:04:30 crc kubenswrapper[4886]: I0129 17:04:30.524006 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d860-account-create-update-5kd66"] Jan 29 17:04:30 crc kubenswrapper[4886]: W0129 17:04:30.528751 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c4e1c71_a857_4feb_8778_ba3aa8b7dbfe.slice/crio-b50b1c67e2972d88bd8981e1a3db87ee14511c02cd94a92c47a372ec32761177 WatchSource:0}: Error finding container b50b1c67e2972d88bd8981e1a3db87ee14511c02cd94a92c47a372ec32761177: Status 404 returned error can't find the container with id b50b1c67e2972d88bd8981e1a3db87ee14511c02cd94a92c47a372ec32761177 Jan 29 17:04:30 crc kubenswrapper[4886]: W0129 17:04:30.534098 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa302a57_5c6b_41b1_ac4b_7d9095b7b65a.slice/crio-75581e1d16d26560497cc9988813329216f56a92bcacbc7cddb3b31eef34be95 WatchSource:0}: Error finding container 75581e1d16d26560497cc9988813329216f56a92bcacbc7cddb3b31eef34be95: Status 404 returned error can't find the container with id 75581e1d16d26560497cc9988813329216f56a92bcacbc7cddb3b31eef34be95 Jan 29 17:04:30 crc kubenswrapper[4886]: I0129 17:04:30.534710 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mdvpb"] Jan 29 17:04:30 crc kubenswrapper[4886]: I0129 17:04:30.542062 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-00e3-account-create-update-5hhsj"] Jan 29 17:04:30 crc kubenswrapper[4886]: I0129 17:04:30.643244 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a29ba47-9a94-492f-8abd-c01b04d0b3c1" path="/var/lib/kubelet/pods/7a29ba47-9a94-492f-8abd-c01b04d0b3c1/volumes" Jan 29 17:04:30 crc kubenswrapper[4886]: W0129 17:04:30.722200 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bcdded9_ad2a_4fcc_82f1_0a13cf85b06d.slice/crio-01b4206a66380781bc1d5bf890de4dd2a4c91be01985eaaaf4ae95a14ceba772 WatchSource:0}: Error finding container 01b4206a66380781bc1d5bf890de4dd2a4c91be01985eaaaf4ae95a14ceba772: Status 404 returned error can't find the container with id 01b4206a66380781bc1d5bf890de4dd2a4c91be01985eaaaf4ae95a14ceba772 Jan 29 17:04:30 crc kubenswrapper[4886]: 
W0129 17:04:30.726807 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode489f203_c94a_4bbb_b22a_750bec963d77.slice/crio-3494f9c79f1c1ef413b78a2d49593156e0435e82f4c6ab83f28f950673f2985c WatchSource:0}: Error finding container 3494f9c79f1c1ef413b78a2d49593156e0435e82f4c6ab83f28f950673f2985c: Status 404 returned error can't find the container with id 3494f9c79f1c1ef413b78a2d49593156e0435e82f4c6ab83f28f950673f2985c
Jan 29 17:04:30 crc kubenswrapper[4886]: I0129 17:04:30.726875 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-b7d9p-config-fbd7w"]
Jan 29 17:04:30 crc kubenswrapper[4886]: I0129 17:04:30.739782 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-4vq4n"]
Jan 29 17:04:30 crc kubenswrapper[4886]: I0129 17:04:30.749822 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-sgspp"]
Jan 29 17:04:30 crc kubenswrapper[4886]: I0129 17:04:30.848547 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-fw887"]
Jan 29 17:04:30 crc kubenswrapper[4886]: I0129 17:04:30.849835 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-fw887"
Jan 29 17:04:30 crc kubenswrapper[4886]: I0129 17:04:30.871184 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-fw887"]
Jan 29 17:04:30 crc kubenswrapper[4886]: I0129 17:04:30.914711 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4xhg\" (UniqueName: \"kubernetes.io/projected/6479af73-81ef-4755-89b5-3a2dd44e99b3-kube-api-access-m4xhg\") pod \"mysqld-exporter-openstack-db-create-fw887\" (UID: \"6479af73-81ef-4755-89b5-3a2dd44e99b3\") " pod="openstack/mysqld-exporter-openstack-db-create-fw887"
Jan 29 17:04:30 crc kubenswrapper[4886]: I0129 17:04:30.914843 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6479af73-81ef-4755-89b5-3a2dd44e99b3-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-fw887\" (UID: \"6479af73-81ef-4755-89b5-3a2dd44e99b3\") " pod="openstack/mysqld-exporter-openstack-db-create-fw887"
Jan 29 17:04:30 crc kubenswrapper[4886]: I0129 17:04:30.917796 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-29gw9" podUID="4ef7b166-c078-4530-b05b-ae3e44088122" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.162:5353: i/o timeout"
Jan 29 17:04:31 crc kubenswrapper[4886]: I0129 17:04:31.015788 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4xhg\" (UniqueName: \"kubernetes.io/projected/6479af73-81ef-4755-89b5-3a2dd44e99b3-kube-api-access-m4xhg\") pod \"mysqld-exporter-openstack-db-create-fw887\" (UID: \"6479af73-81ef-4755-89b5-3a2dd44e99b3\") " pod="openstack/mysqld-exporter-openstack-db-create-fw887"
Jan 29 17:04:31 crc kubenswrapper[4886]: I0129 17:04:31.015906 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6479af73-81ef-4755-89b5-3a2dd44e99b3-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-fw887\" (UID: \"6479af73-81ef-4755-89b5-3a2dd44e99b3\") " pod="openstack/mysqld-exporter-openstack-db-create-fw887"
Jan 29 17:04:31 crc kubenswrapper[4886]: I0129 17:04:31.016616 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6479af73-81ef-4755-89b5-3a2dd44e99b3-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-fw887\" (UID: \"6479af73-81ef-4755-89b5-3a2dd44e99b3\") " pod="openstack/mysqld-exporter-openstack-db-create-fw887"
Jan 29 17:04:31 crc kubenswrapper[4886]: I0129 17:04:31.038987 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4xhg\" (UniqueName: \"kubernetes.io/projected/6479af73-81ef-4755-89b5-3a2dd44e99b3-kube-api-access-m4xhg\") pod \"mysqld-exporter-openstack-db-create-fw887\" (UID: \"6479af73-81ef-4755-89b5-3a2dd44e99b3\") " pod="openstack/mysqld-exporter-openstack-db-create-fw887"
Jan 29 17:04:31 crc kubenswrapper[4886]: I0129 17:04:31.208287 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d860-account-create-update-5kd66" event={"ID":"66c16915-30cc-4a4f-81ff-4b82cf152968","Type":"ContainerStarted","Data":"4b1a89009d472fe5b2dceb7b8a0b8294983468e34c2707bffbc7bce6c3368172"}
Jan 29 17:04:31 crc kubenswrapper[4886]: I0129 17:04:31.209493 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-00e3-account-create-update-5hhsj" event={"ID":"aa302a57-5c6b-41b1-ac4b-7d9095b7b65a","Type":"ContainerStarted","Data":"75581e1d16d26560497cc9988813329216f56a92bcacbc7cddb3b31eef34be95"}
Jan 29 17:04:31 crc kubenswrapper[4886]: I0129 17:04:31.210874 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f0b5-account-create-update-8b8vz" event={"ID":"29921ec8-f68f-4547-a2c0-d4d3f5de6960","Type":"ContainerStarted","Data":"e3585e24c6e310ab66cc3acdb8b7196a729aef835b23a64db0aa1d39659b162c"}
Jan 29 17:04:31 crc kubenswrapper[4886]: I0129 17:04:31.231940 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4vq4n" event={"ID":"6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d","Type":"ContainerStarted","Data":"01b4206a66380781bc1d5bf890de4dd2a4c91be01985eaaaf4ae95a14ceba772"}
Jan 29 17:04:31 crc kubenswrapper[4886]: I0129 17:04:31.233722 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b7d9p-config-fbd7w" event={"ID":"e489f203-c94a-4bbb-b22a-750bec963d77","Type":"ContainerStarted","Data":"3494f9c79f1c1ef413b78a2d49593156e0435e82f4c6ab83f28f950673f2985c"}
Jan 29 17:04:31 crc kubenswrapper[4886]: I0129 17:04:31.234815 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mdvpb" event={"ID":"9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe","Type":"ContainerStarted","Data":"b50b1c67e2972d88bd8981e1a3db87ee14511c02cd94a92c47a372ec32761177"}
Jan 29 17:04:31 crc kubenswrapper[4886]: I0129 17:04:31.236017 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2b0be43b-8956-45aa-ad50-de9183b3fea3","Type":"ContainerStarted","Data":"215a0a427916185913ef03f036755684e9f8fb11bc8d8ec6645e74d9b4d6fab0"}
Jan 29 17:04:31 crc kubenswrapper[4886]: I0129 17:04:31.236725 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-sgspp" event={"ID":"b696cd6b-840b-4505-9010-114d223a90e9","Type":"ContainerStarted","Data":"1e72a81ebd6c0cbcca3631d9164e1b3194deb99d97abb1a18f67baa27d377916"}
Jan 29 17:04:31 crc kubenswrapper[4886]: I0129 17:04:31.283435 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-fw887"
Jan 29 17:04:31 crc kubenswrapper[4886]: I0129 17:04:31.528436 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-5ab6-account-create-update-4xrnn"]
Jan 29 17:04:31 crc kubenswrapper[4886]: I0129 17:04:31.530076 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-5ab6-account-create-update-4xrnn"
Jan 29 17:04:31 crc kubenswrapper[4886]: I0129 17:04:31.533505 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret"
Jan 29 17:04:31 crc kubenswrapper[4886]: I0129 17:04:31.553770 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-5ab6-account-create-update-4xrnn"]
Jan 29 17:04:31 crc kubenswrapper[4886]: I0129 17:04:31.629184 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c996a30-f53d-49f1-a7d1-2ca23704b48e-operator-scripts\") pod \"mysqld-exporter-5ab6-account-create-update-4xrnn\" (UID: \"7c996a30-f53d-49f1-a7d1-2ca23704b48e\") " pod="openstack/mysqld-exporter-5ab6-account-create-update-4xrnn"
Jan 29 17:04:31 crc kubenswrapper[4886]: I0129 17:04:31.629252 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n6pj\" (UniqueName: \"kubernetes.io/projected/7c996a30-f53d-49f1-a7d1-2ca23704b48e-kube-api-access-7n6pj\") pod \"mysqld-exporter-5ab6-account-create-update-4xrnn\" (UID: \"7c996a30-f53d-49f1-a7d1-2ca23704b48e\") " pod="openstack/mysqld-exporter-5ab6-account-create-update-4xrnn"
Jan 29 17:04:31 crc kubenswrapper[4886]: I0129 17:04:31.731783 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n6pj\" (UniqueName: \"kubernetes.io/projected/7c996a30-f53d-49f1-a7d1-2ca23704b48e-kube-api-access-7n6pj\") pod \"mysqld-exporter-5ab6-account-create-update-4xrnn\" (UID: \"7c996a30-f53d-49f1-a7d1-2ca23704b48e\") " pod="openstack/mysqld-exporter-5ab6-account-create-update-4xrnn"
Jan 29 17:04:31 crc kubenswrapper[4886]: I0129 17:04:31.732065 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c996a30-f53d-49f1-a7d1-2ca23704b48e-operator-scripts\") pod \"mysqld-exporter-5ab6-account-create-update-4xrnn\" (UID: \"7c996a30-f53d-49f1-a7d1-2ca23704b48e\") " pod="openstack/mysqld-exporter-5ab6-account-create-update-4xrnn"
Jan 29 17:04:31 crc kubenswrapper[4886]: I0129 17:04:31.732909 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c996a30-f53d-49f1-a7d1-2ca23704b48e-operator-scripts\") pod \"mysqld-exporter-5ab6-account-create-update-4xrnn\" (UID: \"7c996a30-f53d-49f1-a7d1-2ca23704b48e\") " pod="openstack/mysqld-exporter-5ab6-account-create-update-4xrnn"
Jan 29 17:04:31 crc kubenswrapper[4886]: I0129 17:04:31.751753 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n6pj\" (UniqueName: \"kubernetes.io/projected/7c996a30-f53d-49f1-a7d1-2ca23704b48e-kube-api-access-7n6pj\") pod \"mysqld-exporter-5ab6-account-create-update-4xrnn\" (UID: \"7c996a30-f53d-49f1-a7d1-2ca23704b48e\") " pod="openstack/mysqld-exporter-5ab6-account-create-update-4xrnn"
Jan 29 17:04:31 crc kubenswrapper[4886]: I0129 17:04:31.853792 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-5ab6-account-create-update-4xrnn"
Jan 29 17:04:31 crc kubenswrapper[4886]: I0129 17:04:31.861452 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-fw887"]
Jan 29 17:04:32 crc kubenswrapper[4886]: I0129 17:04:32.198175 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-ff68z"]
Jan 29 17:04:32 crc kubenswrapper[4886]: I0129 17:04:32.229007 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ff68z"
Jan 29 17:04:32 crc kubenswrapper[4886]: I0129 17:04:32.240995 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 29 17:04:32 crc kubenswrapper[4886]: I0129 17:04:32.266817 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f0b5-account-create-update-8b8vz" event={"ID":"29921ec8-f68f-4547-a2c0-d4d3f5de6960","Type":"ContainerStarted","Data":"bb6b6c4443538f6a82366349284b39cf96fcba5ff7da991fc88f83ec4dbea3cd"}
Jan 29 17:04:32 crc kubenswrapper[4886]: I0129 17:04:32.269437 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9d0db9ae-746b-419a-bc61-bf85645d2bff","Type":"ContainerStarted","Data":"d1dc3fb46e158387bf0f32779951559ae37a47a019dba0a8cc0c029c48708606"}
Jan 29 17:04:32 crc kubenswrapper[4886]: I0129 17:04:32.270655 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ff68z"]
Jan 29 17:04:32 crc kubenswrapper[4886]: I0129 17:04:32.271309 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"842bfe4d-04ba-4143-9076-3033163c7b82","Type":"ContainerStarted","Data":"08d69a0d8dd87ebbab66b41851c9555c89b1c9518edbf660dc3fb4f99c870c1b"}
Jan 29 17:04:32 crc kubenswrapper[4886]: I0129 17:04:32.274702 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10","Type":"ContainerStarted","Data":"0f7d7bba0e7f3ae79ef50440ef9e40b86880917e00b15ccefc3f045f4186b63e"}
Jan 29 17:04:32 crc kubenswrapper[4886]: I0129 17:04:32.279787 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-fw887" event={"ID":"6479af73-81ef-4755-89b5-3a2dd44e99b3","Type":"ContainerStarted","Data":"467dace8916b0217ae148ecca1b8485085023c2a93c1b1258e47bf9de86c975f"}
Jan 29 17:04:32 crc kubenswrapper[4886]: I0129 17:04:32.281885 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d860-account-create-update-5kd66" event={"ID":"66c16915-30cc-4a4f-81ff-4b82cf152968","Type":"ContainerStarted","Data":"dae301d02f31a6be0962a543705953e6d92f427e7aa9bc8443d7688a4f7705a4"}
Jan 29 17:04:32 crc kubenswrapper[4886]: I0129 17:04:32.314464 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-5ab6-account-create-update-4xrnn"]
Jan 29 17:04:32 crc kubenswrapper[4886]: I0129 17:04:32.345088 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc8nt\" (UniqueName: \"kubernetes.io/projected/9b69834e-55cc-4ec2-b451-fafe1f417c53-kube-api-access-qc8nt\") pod \"root-account-create-update-ff68z\" (UID: \"9b69834e-55cc-4ec2-b451-fafe1f417c53\") " pod="openstack/root-account-create-update-ff68z"
Jan 29 17:04:32 crc kubenswrapper[4886]: I0129 17:04:32.345198 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b69834e-55cc-4ec2-b451-fafe1f417c53-operator-scripts\") pod \"root-account-create-update-ff68z\" (UID: \"9b69834e-55cc-4ec2-b451-fafe1f417c53\") " pod="openstack/root-account-create-update-ff68z"
Jan 29 17:04:32 crc kubenswrapper[4886]: I0129 17:04:32.447477 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc8nt\" (UniqueName: \"kubernetes.io/projected/9b69834e-55cc-4ec2-b451-fafe1f417c53-kube-api-access-qc8nt\") pod \"root-account-create-update-ff68z\" (UID: \"9b69834e-55cc-4ec2-b451-fafe1f417c53\") " pod="openstack/root-account-create-update-ff68z"
Jan 29 17:04:32 crc kubenswrapper[4886]: I0129 17:04:32.447595 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b69834e-55cc-4ec2-b451-fafe1f417c53-operator-scripts\") pod \"root-account-create-update-ff68z\" (UID: \"9b69834e-55cc-4ec2-b451-fafe1f417c53\") " pod="openstack/root-account-create-update-ff68z"
Jan 29 17:04:32 crc kubenswrapper[4886]: I0129 17:04:32.448531 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b69834e-55cc-4ec2-b451-fafe1f417c53-operator-scripts\") pod \"root-account-create-update-ff68z\" (UID: \"9b69834e-55cc-4ec2-b451-fafe1f417c53\") " pod="openstack/root-account-create-update-ff68z"
Jan 29 17:04:32 crc kubenswrapper[4886]: I0129 17:04:32.468265 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc8nt\" (UniqueName: \"kubernetes.io/projected/9b69834e-55cc-4ec2-b451-fafe1f417c53-kube-api-access-qc8nt\") pod \"root-account-create-update-ff68z\" (UID: \"9b69834e-55cc-4ec2-b451-fafe1f417c53\") " pod="openstack/root-account-create-update-ff68z"
Jan 29 17:04:32 crc kubenswrapper[4886]: I0129 17:04:32.759309 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ff68z"
Jan 29 17:04:33 crc kubenswrapper[4886]: I0129 17:04:33.296130 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ff68z"]
Jan 29 17:04:33 crc kubenswrapper[4886]: W0129 17:04:33.297671 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b69834e_55cc_4ec2_b451_fafe1f417c53.slice/crio-0e84b35431f435c10da1a1d55797c5bcb58d9704217c007ee48b93dde2741c31 WatchSource:0}: Error finding container 0e84b35431f435c10da1a1d55797c5bcb58d9704217c007ee48b93dde2741c31: Status 404 returned error can't find the container with id 0e84b35431f435c10da1a1d55797c5bcb58d9704217c007ee48b93dde2741c31
Jan 29 17:04:33 crc kubenswrapper[4886]: I0129 17:04:33.298344 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b7d9p-config-fbd7w" event={"ID":"e489f203-c94a-4bbb-b22a-750bec963d77","Type":"ContainerStarted","Data":"6412eac490b1fbd3d0b00a59dd461a3eb98d94b486a8096aadd0a5be64624a01"}
Jan 29 17:04:33 crc kubenswrapper[4886]: I0129 17:04:33.299719 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mdvpb" event={"ID":"9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe","Type":"ContainerStarted","Data":"cbbd4f5360c0e0e269db9be0e3b0c9d872ff0fa28897b05c76dba7a51c4b1e4c"}
Jan 29 17:04:33 crc kubenswrapper[4886]: I0129 17:04:33.300877 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-5ab6-account-create-update-4xrnn" event={"ID":"7c996a30-f53d-49f1-a7d1-2ca23704b48e","Type":"ContainerStarted","Data":"02ae7964e4db04590375f8dc8b2d4e000ef65dea8116644a045a8c2fec3c1786"}
Jan 29 17:04:33 crc kubenswrapper[4886]: I0129 17:04:33.302947 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-sgspp" event={"ID":"b696cd6b-840b-4505-9010-114d223a90e9","Type":"ContainerStarted","Data":"11300dda6841f3bcadbf8fc0b293c71f220072872935dad2eeec46ba483d2773"}
Jan 29 17:04:33 crc kubenswrapper[4886]: I0129 17:04:33.305855 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-00e3-account-create-update-5hhsj" event={"ID":"aa302a57-5c6b-41b1-ac4b-7d9095b7b65a","Type":"ContainerStarted","Data":"20030a467bab27996b15106f17b7491349b629c6d6de493fc3b1efb1f226e72c"}
Jan 29 17:04:33 crc kubenswrapper[4886]: I0129 17:04:33.305896 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 29 17:04:33 crc kubenswrapper[4886]: I0129 17:04:33.334837 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=47.28895009 podStartE2EDuration="2m10.3348213s" podCreationTimestamp="2026-01-29 17:02:23 +0000 UTC" firstStartedPulling="2026-01-29 17:02:26.222949197 +0000 UTC m=+2429.131668479" lastFinishedPulling="2026-01-29 17:03:49.268820417 +0000 UTC m=+2512.177539689" observedRunningTime="2026-01-29 17:04:33.331216501 +0000 UTC m=+2556.239935773" watchObservedRunningTime="2026-01-29 17:04:33.3348213 +0000 UTC m=+2556.243540572"
Jan 29 17:04:34 crc kubenswrapper[4886]: I0129 17:04:34.043973 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-b7d9p"
Jan 29 17:04:34 crc kubenswrapper[4886]: I0129 17:04:34.318309 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ff68z" event={"ID":"9b69834e-55cc-4ec2-b451-fafe1f417c53","Type":"ContainerStarted","Data":"0e84b35431f435c10da1a1d55797c5bcb58d9704217c007ee48b93dde2741c31"}
Jan 29 17:04:34 crc kubenswrapper[4886]: I0129 17:04:34.320839 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4vq4n" event={"ID":"6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d","Type":"ContainerStarted","Data":"fbecb6255a3f2d33607adb71963134e7eb4f057014a12ad026702a5429304db4"}
Jan 29 17:04:34 crc kubenswrapper[4886]: I0129 17:04:34.321140 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 29 17:04:34 crc kubenswrapper[4886]: I0129 17:04:34.321381 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2"
Jan 29 17:04:34 crc kubenswrapper[4886]: I0129 17:04:34.352345 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371906.502464 podStartE2EDuration="2m10.352311364s" podCreationTimestamp="2026-01-29 17:02:24 +0000 UTC" firstStartedPulling="2026-01-29 17:02:26.89804805 +0000 UTC m=+2429.806767322" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:04:34.340278513 +0000 UTC m=+2557.248997795" watchObservedRunningTime="2026-01-29 17:04:34.352311364 +0000 UTC m=+2557.261030636"
Jan 29 17:04:34 crc kubenswrapper[4886]: I0129 17:04:34.365815 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=49.452227729 podStartE2EDuration="2m11.365793055s" podCreationTimestamp="2026-01-29 17:02:23 +0000 UTC" firstStartedPulling="2026-01-29 17:02:26.484539421 +0000 UTC m=+2429.393258693" lastFinishedPulling="2026-01-29 17:03:48.398104747 +0000 UTC m=+2511.306824019" observedRunningTime="2026-01-29 17:04:34.360710876 +0000 UTC m=+2557.269430168" watchObservedRunningTime="2026-01-29 17:04:34.365793055 +0000 UTC m=+2557.274512327"
Jan 29 17:04:34 crc kubenswrapper[4886]: I0129 17:04:34.408752 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-f0b5-account-create-update-8b8vz" podStartSLOduration=10.408732388 podStartE2EDuration="10.408732388s" podCreationTimestamp="2026-01-29 17:04:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:04:34.385251981 +0000 UTC m=+2557.293971263" watchObservedRunningTime="2026-01-29 17:04:34.408732388 +0000 UTC m=+2557.317451650"
Jan 29 17:04:34 crc kubenswrapper[4886]: I0129 17:04:34.410690 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=49.181223366 podStartE2EDuration="2m11.410682052s" podCreationTimestamp="2026-01-29 17:02:23 +0000 UTC" firstStartedPulling="2026-01-29 17:02:26.168206379 +0000 UTC m=+2429.076925651" lastFinishedPulling="2026-01-29 17:03:48.397665065 +0000 UTC m=+2511.306384337" observedRunningTime="2026-01-29 17:04:34.401849369 +0000 UTC m=+2557.310568661" watchObservedRunningTime="2026-01-29 17:04:34.410682052 +0000 UTC m=+2557.319401334"
Jan 29 17:04:35 crc kubenswrapper[4886]: I0129 17:04:35.331355 4886 generic.go:334] "Generic (PLEG): container finished" podID="e489f203-c94a-4bbb-b22a-750bec963d77" containerID="6412eac490b1fbd3d0b00a59dd461a3eb98d94b486a8096aadd0a5be64624a01" exitCode=0
Jan 29 17:04:35 crc kubenswrapper[4886]: I0129 17:04:35.331421 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b7d9p-config-fbd7w" event={"ID":"e489f203-c94a-4bbb-b22a-750bec963d77","Type":"ContainerDied","Data":"6412eac490b1fbd3d0b00a59dd461a3eb98d94b486a8096aadd0a5be64624a01"}
Jan 29 17:04:35 crc kubenswrapper[4886]: I0129 17:04:35.333627 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-5ab6-account-create-update-4xrnn" event={"ID":"7c996a30-f53d-49f1-a7d1-2ca23704b48e","Type":"ContainerStarted","Data":"5019558a9253bbef2f27d289d48dcc75d2b0f7a1469d88aa8fb186da0d61df99"}
Jan 29 17:04:35 crc kubenswrapper[4886]: I0129 17:04:35.335284 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-fw887" event={"ID":"6479af73-81ef-4755-89b5-3a2dd44e99b3","Type":"ContainerStarted","Data":"0341a2566f1bb6385e4ca19bd7599e154fd2818c69290a143a8dae194ef6f346"}
Jan 29 17:04:35 crc kubenswrapper[4886]: I0129 17:04:35.337102 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ce7955a1-eb58-425a-872a-7ec102b8e090","Type":"ContainerStarted","Data":"36870feb46aff15218a1df0a6e9d4aa854998ebadaa74a5a50b3e39905ffbc8c"}
Jan 29 17:04:35 crc kubenswrapper[4886]: I0129 17:04:35.339213 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ff68z" event={"ID":"9b69834e-55cc-4ec2-b451-fafe1f417c53","Type":"ContainerStarted","Data":"6e26b828a472fc3b1df8fa1fda19373a058c84b6a577b9a6475d17f33176e5c8"}
Jan 29 17:04:35 crc kubenswrapper[4886]: I0129 17:04:35.389746 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-d860-account-create-update-5kd66" podStartSLOduration=7.389723446 podStartE2EDuration="7.389723446s" podCreationTimestamp="2026-01-29 17:04:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:04:35.379786533 +0000 UTC m=+2558.288505805" watchObservedRunningTime="2026-01-29 17:04:35.389723446 +0000 UTC m=+2558.298442718"
Jan 29 17:04:35 crc kubenswrapper[4886]: I0129 17:04:35.411946 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-5ab6-account-create-update-4xrnn" podStartSLOduration=4.411921427 podStartE2EDuration="4.411921427s" podCreationTimestamp="2026-01-29 17:04:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:04:35.399100754 +0000 UTC m=+2558.307820036" watchObservedRunningTime="2026-01-29 17:04:35.411921427 +0000 UTC m=+2558.320640699"
Jan 29 17:04:35 crc kubenswrapper[4886]: I0129 17:04:35.426128 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-00e3-account-create-update-5hhsj" podStartSLOduration=7.426110388 podStartE2EDuration="7.426110388s" podCreationTimestamp="2026-01-29 17:04:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:04:35.420671948 +0000 UTC m=+2558.329391230" watchObservedRunningTime="2026-01-29 17:04:35.426110388 +0000 UTC m=+2558.334829660"
Jan 29 17:04:35 crc kubenswrapper[4886]: I0129 17:04:35.444015 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1"
Jan 29 17:04:35 crc kubenswrapper[4886]: I0129 17:04:35.445745 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-4vq4n" podStartSLOduration=7.445722118 podStartE2EDuration="7.445722118s" podCreationTimestamp="2026-01-29 17:04:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:04:35.435172038 +0000 UTC m=+2558.343891320" watchObservedRunningTime="2026-01-29 17:04:35.445722118 +0000 UTC m=+2558.354441390"
Jan 29 17:04:35 crc kubenswrapper[4886]: I0129 17:04:35.466639 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-ff68z" podStartSLOduration=3.466618794 podStartE2EDuration="3.466618794s" podCreationTimestamp="2026-01-29 17:04:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:04:35.449681987 +0000 UTC m=+2558.358401249" watchObservedRunningTime="2026-01-29 17:04:35.466618794 +0000 UTC m=+2558.375338066"
Jan 29 17:04:35 crc kubenswrapper[4886]: I0129 17:04:35.471367 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-mdvpb" podStartSLOduration=11.471354024 podStartE2EDuration="11.471354024s" podCreationTimestamp="2026-01-29 17:04:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:04:35.462555742 +0000 UTC m=+2558.371275004" watchObservedRunningTime="2026-01-29 17:04:35.471354024 +0000 UTC m=+2558.380073296"
Jan 29 17:04:35 crc kubenswrapper[4886]: I0129 17:04:35.486912 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-db-create-fw887" podStartSLOduration=5.486893702 podStartE2EDuration="5.486893702s" podCreationTimestamp="2026-01-29 17:04:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:04:35.473297578 +0000 UTC m=+2558.382016860" watchObservedRunningTime="2026-01-29 17:04:35.486893702 +0000 UTC m=+2558.395612974"
Jan 29 17:04:35 crc kubenswrapper[4886]: I0129 17:04:35.494373 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-sgspp" podStartSLOduration=7.494358908 podStartE2EDuration="7.494358908s" podCreationTimestamp="2026-01-29 17:04:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:04:35.485908845 +0000 UTC m=+2558.394628137" watchObservedRunningTime="2026-01-29 17:04:35.494358908 +0000 UTC m=+2558.403078180"
Jan 29 17:04:40 crc kubenswrapper[4886]: I0129 17:04:40.012173 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-b7d9p-config-fbd7w"
Jan 29 17:04:40 crc kubenswrapper[4886]: I0129 17:04:40.125902 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e489f203-c94a-4bbb-b22a-750bec963d77-var-run\") pod \"e489f203-c94a-4bbb-b22a-750bec963d77\" (UID: \"e489f203-c94a-4bbb-b22a-750bec963d77\") "
Jan 29 17:04:40 crc kubenswrapper[4886]: I0129 17:04:40.126012 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e489f203-c94a-4bbb-b22a-750bec963d77-additional-scripts\") pod \"e489f203-c94a-4bbb-b22a-750bec963d77\" (UID: \"e489f203-c94a-4bbb-b22a-750bec963d77\") "
Jan 29 17:04:40 crc kubenswrapper[4886]: I0129 17:04:40.126039 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e489f203-c94a-4bbb-b22a-750bec963d77-var-log-ovn\") pod \"e489f203-c94a-4bbb-b22a-750bec963d77\" (UID: \"e489f203-c94a-4bbb-b22a-750bec963d77\") "
Jan 29 17:04:40 crc kubenswrapper[4886]: I0129 17:04:40.126111 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e489f203-c94a-4bbb-b22a-750bec963d77-scripts\") pod \"e489f203-c94a-4bbb-b22a-750bec963d77\" (UID: \"e489f203-c94a-4bbb-b22a-750bec963d77\") "
Jan 29 17:04:40 crc kubenswrapper[4886]: I0129 17:04:40.126103 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e489f203-c94a-4bbb-b22a-750bec963d77-var-run" (OuterVolumeSpecName: "var-run") pod "e489f203-c94a-4bbb-b22a-750bec963d77" (UID: "e489f203-c94a-4bbb-b22a-750bec963d77"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 17:04:40 crc kubenswrapper[4886]: I0129 17:04:40.126149 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e489f203-c94a-4bbb-b22a-750bec963d77-var-run-ovn\") pod \"e489f203-c94a-4bbb-b22a-750bec963d77\" (UID: \"e489f203-c94a-4bbb-b22a-750bec963d77\") "
Jan 29 17:04:40 crc kubenswrapper[4886]: I0129 17:04:40.126175 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e489f203-c94a-4bbb-b22a-750bec963d77-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "e489f203-c94a-4bbb-b22a-750bec963d77" (UID: "e489f203-c94a-4bbb-b22a-750bec963d77"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 17:04:40 crc kubenswrapper[4886]: I0129 17:04:40.126209 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e489f203-c94a-4bbb-b22a-750bec963d77-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "e489f203-c94a-4bbb-b22a-750bec963d77" (UID: "e489f203-c94a-4bbb-b22a-750bec963d77"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 17:04:40 crc kubenswrapper[4886]: I0129 17:04:40.126381 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7nwl\" (UniqueName: \"kubernetes.io/projected/e489f203-c94a-4bbb-b22a-750bec963d77-kube-api-access-j7nwl\") pod \"e489f203-c94a-4bbb-b22a-750bec963d77\" (UID: \"e489f203-c94a-4bbb-b22a-750bec963d77\") "
Jan 29 17:04:40 crc kubenswrapper[4886]: I0129 17:04:40.126740 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e489f203-c94a-4bbb-b22a-750bec963d77-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "e489f203-c94a-4bbb-b22a-750bec963d77" (UID: "e489f203-c94a-4bbb-b22a-750bec963d77"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 17:04:40 crc kubenswrapper[4886]: I0129 17:04:40.127075 4886 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e489f203-c94a-4bbb-b22a-750bec963d77-var-run\") on node \"crc\" DevicePath \"\""
Jan 29 17:04:40 crc kubenswrapper[4886]: I0129 17:04:40.127096 4886 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e489f203-c94a-4bbb-b22a-750bec963d77-additional-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 17:04:40 crc kubenswrapper[4886]: I0129 17:04:40.127108 4886 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e489f203-c94a-4bbb-b22a-750bec963d77-var-log-ovn\") on node \"crc\" DevicePath \"\""
Jan 29 17:04:40 crc kubenswrapper[4886]: I0129 17:04:40.127117 4886 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e489f203-c94a-4bbb-b22a-750bec963d77-var-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 29 17:04:40 crc kubenswrapper[4886]: I0129 17:04:40.127073 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e489f203-c94a-4bbb-b22a-750bec963d77-scripts" (OuterVolumeSpecName: "scripts") pod "e489f203-c94a-4bbb-b22a-750bec963d77" (UID: "e489f203-c94a-4bbb-b22a-750bec963d77"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 17:04:40 crc kubenswrapper[4886]: I0129 17:04:40.139699 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e489f203-c94a-4bbb-b22a-750bec963d77-kube-api-access-j7nwl" (OuterVolumeSpecName: "kube-api-access-j7nwl") pod "e489f203-c94a-4bbb-b22a-750bec963d77" (UID: "e489f203-c94a-4bbb-b22a-750bec963d77"). InnerVolumeSpecName "kube-api-access-j7nwl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:04:40 crc kubenswrapper[4886]: I0129 17:04:40.229344 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7nwl\" (UniqueName: \"kubernetes.io/projected/e489f203-c94a-4bbb-b22a-750bec963d77-kube-api-access-j7nwl\") on node \"crc\" DevicePath \"\""
Jan 29 17:04:40 crc kubenswrapper[4886]: I0129 17:04:40.229375 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e489f203-c94a-4bbb-b22a-750bec963d77-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 17:04:40 crc kubenswrapper[4886]: I0129 17:04:40.440378 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b7d9p-config-fbd7w" event={"ID":"e489f203-c94a-4bbb-b22a-750bec963d77","Type":"ContainerDied","Data":"3494f9c79f1c1ef413b78a2d49593156e0435e82f4c6ab83f28f950673f2985c"}
Jan 29 17:04:40 crc kubenswrapper[4886]: I0129 17:04:40.440422 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3494f9c79f1c1ef413b78a2d49593156e0435e82f4c6ab83f28f950673f2985c"
Jan 29 17:04:40 crc kubenswrapper[4886]: I0129 17:04:40.440431 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-b7d9p-config-fbd7w"
Jan 29 17:04:41 crc kubenswrapper[4886]: I0129 17:04:41.111031 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-b7d9p-config-fbd7w"]
Jan 29 17:04:41 crc kubenswrapper[4886]: I0129 17:04:41.121303 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-b7d9p-config-fbd7w"]
Jan 29 17:04:42 crc kubenswrapper[4886]: I0129 17:04:42.615033 4886 scope.go:117] "RemoveContainer" containerID="1ef597c576c05004c5148470ade7ddd51ab3cad8d942f918ff09afb054559dfc"
Jan 29 17:04:42 crc kubenswrapper[4886]: E0129 17:04:42.615585 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:04:42 crc kubenswrapper[4886]: I0129 17:04:42.658181 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e489f203-c94a-4bbb-b22a-750bec963d77" path="/var/lib/kubelet/pods/e489f203-c94a-4bbb-b22a-750bec963d77/volumes"
Jan 29 17:04:44 crc kubenswrapper[4886]: I0129 17:04:44.310337 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-etc-swift\") pod \"swift-storage-0\" (UID: \"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47\") " pod="openstack/swift-storage-0"
Jan 29 17:04:44 crc kubenswrapper[4886]: E0129 17:04:44.310582 4886 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 29 17:04:44 crc kubenswrapper[4886]: E0129 17:04:44.310707 4886 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 29 17:04:44 crc kubenswrapper[4886]: E0129 17:04:44.310759 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-etc-swift podName:6e2f2c6c-bc32-4a32-ba2c-8954d277ce47 nodeName:}" failed. No retries permitted until 2026-01-29 17:05:16.310743841 +0000 UTC m=+2599.219463113 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-etc-swift") pod "swift-storage-0" (UID: "6e2f2c6c-bc32-4a32-ba2c-8954d277ce47") : configmap "swift-ring-files" not found
Jan 29 17:04:45 crc kubenswrapper[4886]: I0129 17:04:45.299984 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="2b0be43b-8956-45aa-ad50-de9183b3fea3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.144:5671: connect: connection refused"
Jan 29 17:04:45 crc kubenswrapper[4886]: I0129 17:04:45.452986 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.146:5671: connect: connection refused"
Jan 29 17:04:45 crc kubenswrapper[4886]: I0129 17:04:45.639138 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="842bfe4d-04ba-4143-9076-3033163c7b82" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.145:5671: connect: connection refused"
Jan 29 17:04:45 crc kubenswrapper[4886]: I0129 17:04:45.968898 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="9d0db9ae-746b-419a-bc61-bf85645d2bff" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.147:5671: connect: connection refused"
Jan 29 17:04:48 crc kubenswrapper[4886]: I0129 17:04:48.512123 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-s7294" event={"ID":"ebccb3a0-d421-4c30-9201-43e9106e4006","Type":"ContainerStarted","Data":"b9499d28202d4957e50821e930ae2c95870e6ae3730a64237a2f9f54f953765c"}
Jan 29 17:04:49 crc kubenswrapper[4886]: E0129 17:04:49.072386 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34"
Jan 29 17:04:49 crc kubenswrapper[4886]: E0129 17:04:49.072730 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:thanos-sidecar,Image:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34,Command:[],Args:[sidecar --prometheus.url=http://localhost:9090/ --grpc-address=:10901 --http-address=:10902 --log.level=info --prometheus.http-client-file=/etc/thanos/config/prometheus.http-client-file.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:10902,Protocol:TCP,HostIP:,},ContainerPort{Name:grpc,HostPort:0,ContainerPort:10901,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:thanos-prometheus-http-client-file,ReadOnly:false,MountPath:/etc/thanos/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w2cnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(ce7955a1-eb58-425a-872a-7ec102b8e090): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 29 17:04:49 crc kubenswrapper[4886]: E0129 17:04:49.073921 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"prometheus\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\", failed to \"StartContainer\" for \"thanos-sidecar\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/prometheus-metric-storage-0" podUID="ce7955a1-eb58-425a-872a-7ec102b8e090"
Jan 29 17:04:49 crc kubenswrapper[4886]: I0129 17:04:49.521889 4886 generic.go:334] "Generic (PLEG): container finished" podID="9b69834e-55cc-4ec2-b451-fafe1f417c53" containerID="6e26b828a472fc3b1df8fa1fda19373a058c84b6a577b9a6475d17f33176e5c8" exitCode=0
Jan 29 17:04:49 crc kubenswrapper[4886]: I0129 17:04:49.521992 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ff68z" event={"ID":"9b69834e-55cc-4ec2-b451-fafe1f417c53","Type":"ContainerDied","Data":"6e26b828a472fc3b1df8fa1fda19373a058c84b6a577b9a6475d17f33176e5c8"}
Jan 29 17:04:49 crc kubenswrapper[4886]: I0129 17:04:49.525072 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 17:04:49 crc kubenswrapper[4886]: I0129 17:04:49.576807 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-s7294" podStartSLOduration=9.865146722 podStartE2EDuration="37.576782039s" podCreationTimestamp="2026-01-29 17:04:12 +0000 UTC" firstStartedPulling="2026-01-29 17:04:13.842912832 +0000 UTC m=+2536.751632104" lastFinishedPulling="2026-01-29 17:04:41.554548149 +0000 UTC m=+2564.463267421" observedRunningTime="2026-01-29 17:04:49.568216793 +0000 UTC m=+2572.476936075" watchObservedRunningTime="2026-01-29 17:04:49.576782039 +0000 UTC m=+2572.485501311"
Jan 29 17:04:50 crc kubenswrapper[4886]: I0129 17:04:50.963681 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ff68z"
Jan 29 17:04:51 crc kubenswrapper[4886]: I0129 17:04:51.070963 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc8nt\" (UniqueName: \"kubernetes.io/projected/9b69834e-55cc-4ec2-b451-fafe1f417c53-kube-api-access-qc8nt\") pod \"9b69834e-55cc-4ec2-b451-fafe1f417c53\" (UID: \"9b69834e-55cc-4ec2-b451-fafe1f417c53\") "
Jan 29 17:04:51 crc kubenswrapper[4886]: I0129 17:04:51.071121 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b69834e-55cc-4ec2-b451-fafe1f417c53-operator-scripts\") pod \"9b69834e-55cc-4ec2-b451-fafe1f417c53\" (UID: \"9b69834e-55cc-4ec2-b451-fafe1f417c53\") "
Jan 29 17:04:51 crc kubenswrapper[4886]: I0129 17:04:51.071971 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b69834e-55cc-4ec2-b451-fafe1f417c53-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9b69834e-55cc-4ec2-b451-fafe1f417c53" (UID: "9b69834e-55cc-4ec2-b451-fafe1f417c53"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 17:04:51 crc kubenswrapper[4886]: I0129 17:04:51.077608 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b69834e-55cc-4ec2-b451-fafe1f417c53-kube-api-access-qc8nt" (OuterVolumeSpecName: "kube-api-access-qc8nt") pod "9b69834e-55cc-4ec2-b451-fafe1f417c53" (UID: "9b69834e-55cc-4ec2-b451-fafe1f417c53"). InnerVolumeSpecName "kube-api-access-qc8nt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:04:51 crc kubenswrapper[4886]: I0129 17:04:51.173893 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b69834e-55cc-4ec2-b451-fafe1f417c53-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 17:04:51 crc kubenswrapper[4886]: I0129 17:04:51.173930 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc8nt\" (UniqueName: \"kubernetes.io/projected/9b69834e-55cc-4ec2-b451-fafe1f417c53-kube-api-access-qc8nt\") on node \"crc\" DevicePath \"\""
Jan 29 17:04:51 crc kubenswrapper[4886]: I0129 17:04:51.542582 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ff68z" event={"ID":"9b69834e-55cc-4ec2-b451-fafe1f417c53","Type":"ContainerDied","Data":"0e84b35431f435c10da1a1d55797c5bcb58d9704217c007ee48b93dde2741c31"}
Jan 29 17:04:51 crc kubenswrapper[4886]: I0129 17:04:51.542639 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e84b35431f435c10da1a1d55797c5bcb58d9704217c007ee48b93dde2741c31"
Jan 29 17:04:51 crc kubenswrapper[4886]: I0129 17:04:51.542708 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ff68z"
Jan 29 17:04:53 crc kubenswrapper[4886]: I0129 17:04:53.615106 4886 scope.go:117] "RemoveContainer" containerID="1ef597c576c05004c5148470ade7ddd51ab3cad8d942f918ff09afb054559dfc"
Jan 29 17:04:53 crc kubenswrapper[4886]: E0129 17:04:53.615948 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:04:54 crc kubenswrapper[4886]: E0129 17:04:54.481176 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="ce7955a1-eb58-425a-872a-7ec102b8e090"
Jan 29 17:04:54 crc kubenswrapper[4886]: I0129 17:04:54.593869 4886 generic.go:334] "Generic (PLEG): container finished" podID="6479af73-81ef-4755-89b5-3a2dd44e99b3" containerID="0341a2566f1bb6385e4ca19bd7599e154fd2818c69290a143a8dae194ef6f346" exitCode=0
Jan 29 17:04:54 crc kubenswrapper[4886]: I0129 17:04:54.593942 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-fw887" event={"ID":"6479af73-81ef-4755-89b5-3a2dd44e99b3","Type":"ContainerDied","Data":"0341a2566f1bb6385e4ca19bd7599e154fd2818c69290a143a8dae194ef6f346"}
Jan 29 17:04:54 crc kubenswrapper[4886]: I0129 17:04:54.595670 4886 generic.go:334] "Generic (PLEG): container finished" podID="66c16915-30cc-4a4f-81ff-4b82cf152968" containerID="dae301d02f31a6be0962a543705953e6d92f427e7aa9bc8443d7688a4f7705a4" exitCode=0
Jan 29 17:04:54 crc kubenswrapper[4886]: I0129 17:04:54.595722 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d860-account-create-update-5kd66" event={"ID":"66c16915-30cc-4a4f-81ff-4b82cf152968","Type":"ContainerDied","Data":"dae301d02f31a6be0962a543705953e6d92f427e7aa9bc8443d7688a4f7705a4"}
Jan 29 17:04:54 crc kubenswrapper[4886]: I0129 17:04:54.600989 4886 generic.go:334] "Generic (PLEG): container finished" podID="aa302a57-5c6b-41b1-ac4b-7d9095b7b65a" containerID="20030a467bab27996b15106f17b7491349b629c6d6de493fc3b1efb1f226e72c" exitCode=0
Jan 29 17:04:54 crc kubenswrapper[4886]: I0129 17:04:54.601052 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-00e3-account-create-update-5hhsj" event={"ID":"aa302a57-5c6b-41b1-ac4b-7d9095b7b65a","Type":"ContainerDied","Data":"20030a467bab27996b15106f17b7491349b629c6d6de493fc3b1efb1f226e72c"}
Jan 29 17:04:54 crc kubenswrapper[4886]: I0129 17:04:54.604030 4886 generic.go:334] "Generic (PLEG): container finished" podID="29921ec8-f68f-4547-a2c0-d4d3f5de6960" containerID="bb6b6c4443538f6a82366349284b39cf96fcba5ff7da991fc88f83ec4dbea3cd" exitCode=0
Jan 29 17:04:54 crc kubenswrapper[4886]: I0129 17:04:54.604087 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f0b5-account-create-update-8b8vz" event={"ID":"29921ec8-f68f-4547-a2c0-d4d3f5de6960","Type":"ContainerDied","Data":"bb6b6c4443538f6a82366349284b39cf96fcba5ff7da991fc88f83ec4dbea3cd"}
Jan 29 17:04:54 crc kubenswrapper[4886]: I0129 17:04:54.607905 4886 generic.go:334] "Generic (PLEG): container finished" podID="6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d" containerID="fbecb6255a3f2d33607adb71963134e7eb4f057014a12ad026702a5429304db4" exitCode=0
Jan 29 17:04:54 crc kubenswrapper[4886]: I0129 17:04:54.607992 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4vq4n" event={"ID":"6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d","Type":"ContainerDied","Data":"fbecb6255a3f2d33607adb71963134e7eb4f057014a12ad026702a5429304db4"}
Jan 29 17:04:54 crc kubenswrapper[4886]: I0129 17:04:54.609778 4886 generic.go:334] "Generic (PLEG): container finished" podID="9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe" containerID="cbbd4f5360c0e0e269db9be0e3b0c9d872ff0fa28897b05c76dba7a51c4b1e4c" exitCode=0
Jan 29 17:04:54 crc kubenswrapper[4886]: I0129 17:04:54.609832 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mdvpb" event={"ID":"9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe","Type":"ContainerDied","Data":"cbbd4f5360c0e0e269db9be0e3b0c9d872ff0fa28897b05c76dba7a51c4b1e4c"}
Jan 29 17:04:54 crc kubenswrapper[4886]: I0129 17:04:54.612288 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ce7955a1-eb58-425a-872a-7ec102b8e090","Type":"ContainerStarted","Data":"3a9c53d5227fb7b0c6bf2e7197762b1a4d147cab6dde0f951e7924a558b5e58d"}
Jan 29 17:04:54 crc kubenswrapper[4886]: E0129 17:04:54.614665 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="ce7955a1-eb58-425a-872a-7ec102b8e090"
Jan 29 17:04:54 crc kubenswrapper[4886]: I0129 17:04:54.621671 4886 generic.go:334] "Generic (PLEG): container finished" podID="7c996a30-f53d-49f1-a7d1-2ca23704b48e" containerID="5019558a9253bbef2f27d289d48dcc75d2b0f7a1469d88aa8fb186da0d61df99" exitCode=0
Jan 29 17:04:54 crc kubenswrapper[4886]: I0129 17:04:54.623179 4886 generic.go:334] "Generic (PLEG): container finished" podID="b696cd6b-840b-4505-9010-114d223a90e9" containerID="11300dda6841f3bcadbf8fc0b293c71f220072872935dad2eeec46ba483d2773" exitCode=0
Jan 29 17:04:54 crc kubenswrapper[4886]: I0129 17:04:54.627303 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-5ab6-account-create-update-4xrnn" event={"ID":"7c996a30-f53d-49f1-a7d1-2ca23704b48e","Type":"ContainerDied","Data":"5019558a9253bbef2f27d289d48dcc75d2b0f7a1469d88aa8fb186da0d61df99"}
Jan 29 17:04:54 crc kubenswrapper[4886]: I0129 17:04:54.630835 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-sgspp" event={"ID":"b696cd6b-840b-4505-9010-114d223a90e9","Type":"ContainerDied","Data":"11300dda6841f3bcadbf8fc0b293c71f220072872935dad2eeec46ba483d2773"}
Jan 29 17:04:55 crc kubenswrapper[4886]: I0129 17:04:55.297494 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="2b0be43b-8956-45aa-ad50-de9183b3fea3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.144:5671: connect: connection refused"
Jan 29 17:04:55 crc kubenswrapper[4886]: I0129 17:04:55.429690 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-ff68z"]
Jan 29 17:04:55 crc kubenswrapper[4886]: I0129 17:04:55.436975 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-ff68z"]
Jan 29 17:04:55 crc kubenswrapper[4886]: I0129 17:04:55.445049 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.146:5671: connect: connection refused"
Jan 29 17:04:55 crc kubenswrapper[4886]: I0129 17:04:55.640352 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="842bfe4d-04ba-4143-9076-3033163c7b82" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.145:5671: connect: connection refused"
Jan 29 17:04:55 crc kubenswrapper[4886]: I0129 17:04:55.966201 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="9d0db9ae-746b-419a-bc61-bf85645d2bff" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.147:5671: connect: connection refused"
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.523999 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f0b5-account-create-update-8b8vz"
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.529132 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-5ab6-account-create-update-4xrnn"
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.541679 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-sgspp"
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.550113 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-00e3-account-create-update-5hhsj"
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.556663 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-4vq4n"
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.568221 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d860-account-create-update-5kd66"
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.585039 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mdvpb"
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.591730 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-fw887"
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.593686 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n6pj\" (UniqueName: \"kubernetes.io/projected/7c996a30-f53d-49f1-a7d1-2ca23704b48e-kube-api-access-7n6pj\") pod \"7c996a30-f53d-49f1-a7d1-2ca23704b48e\" (UID: \"7c996a30-f53d-49f1-a7d1-2ca23704b48e\") "
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.593740 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29921ec8-f68f-4547-a2c0-d4d3f5de6960-operator-scripts\") pod \"29921ec8-f68f-4547-a2c0-d4d3f5de6960\" (UID: \"29921ec8-f68f-4547-a2c0-d4d3f5de6960\") "
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.593790 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hc79\" (UniqueName: \"kubernetes.io/projected/b696cd6b-840b-4505-9010-114d223a90e9-kube-api-access-8hc79\") pod \"b696cd6b-840b-4505-9010-114d223a90e9\" (UID: \"b696cd6b-840b-4505-9010-114d223a90e9\") "
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.593828 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa302a57-5c6b-41b1-ac4b-7d9095b7b65a-operator-scripts\") pod \"aa302a57-5c6b-41b1-ac4b-7d9095b7b65a\" (UID: \"aa302a57-5c6b-41b1-ac4b-7d9095b7b65a\") "
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.593904 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m47b2\" (UniqueName: \"kubernetes.io/projected/aa302a57-5c6b-41b1-ac4b-7d9095b7b65a-kube-api-access-m47b2\") pod \"aa302a57-5c6b-41b1-ac4b-7d9095b7b65a\" (UID: \"aa302a57-5c6b-41b1-ac4b-7d9095b7b65a\") "
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.593940 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxbwc\" (UniqueName: \"kubernetes.io/projected/29921ec8-f68f-4547-a2c0-d4d3f5de6960-kube-api-access-pxbwc\") pod \"29921ec8-f68f-4547-a2c0-d4d3f5de6960\" (UID: \"29921ec8-f68f-4547-a2c0-d4d3f5de6960\") "
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.594037 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b696cd6b-840b-4505-9010-114d223a90e9-operator-scripts\") pod \"b696cd6b-840b-4505-9010-114d223a90e9\" (UID: \"b696cd6b-840b-4505-9010-114d223a90e9\") "
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.594071 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c996a30-f53d-49f1-a7d1-2ca23704b48e-operator-scripts\") pod \"7c996a30-f53d-49f1-a7d1-2ca23704b48e\" (UID: \"7c996a30-f53d-49f1-a7d1-2ca23704b48e\") "
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.594313 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa302a57-5c6b-41b1-ac4b-7d9095b7b65a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aa302a57-5c6b-41b1-ac4b-7d9095b7b65a" (UID: "aa302a57-5c6b-41b1-ac4b-7d9095b7b65a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.594686 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa302a57-5c6b-41b1-ac4b-7d9095b7b65a-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.594901 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b696cd6b-840b-4505-9010-114d223a90e9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b696cd6b-840b-4505-9010-114d223a90e9" (UID: "b696cd6b-840b-4505-9010-114d223a90e9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.594926 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c996a30-f53d-49f1-a7d1-2ca23704b48e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7c996a30-f53d-49f1-a7d1-2ca23704b48e" (UID: "7c996a30-f53d-49f1-a7d1-2ca23704b48e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.595128 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29921ec8-f68f-4547-a2c0-d4d3f5de6960-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "29921ec8-f68f-4547-a2c0-d4d3f5de6960" (UID: "29921ec8-f68f-4547-a2c0-d4d3f5de6960"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.608101 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29921ec8-f68f-4547-a2c0-d4d3f5de6960-kube-api-access-pxbwc" (OuterVolumeSpecName: "kube-api-access-pxbwc") pod "29921ec8-f68f-4547-a2c0-d4d3f5de6960" (UID: "29921ec8-f68f-4547-a2c0-d4d3f5de6960"). InnerVolumeSpecName "kube-api-access-pxbwc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.608188 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa302a57-5c6b-41b1-ac4b-7d9095b7b65a-kube-api-access-m47b2" (OuterVolumeSpecName: "kube-api-access-m47b2") pod "aa302a57-5c6b-41b1-ac4b-7d9095b7b65a" (UID: "aa302a57-5c6b-41b1-ac4b-7d9095b7b65a"). InnerVolumeSpecName "kube-api-access-m47b2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.626828 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b696cd6b-840b-4505-9010-114d223a90e9-kube-api-access-8hc79" (OuterVolumeSpecName: "kube-api-access-8hc79") pod "b696cd6b-840b-4505-9010-114d223a90e9" (UID: "b696cd6b-840b-4505-9010-114d223a90e9"). InnerVolumeSpecName "kube-api-access-8hc79". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.627006 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c996a30-f53d-49f1-a7d1-2ca23704b48e-kube-api-access-7n6pj" (OuterVolumeSpecName: "kube-api-access-7n6pj") pod "7c996a30-f53d-49f1-a7d1-2ca23704b48e" (UID: "7c996a30-f53d-49f1-a7d1-2ca23704b48e"). InnerVolumeSpecName "kube-api-access-7n6pj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.664476 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d860-account-create-update-5kd66"
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.681164 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-4vq4n"
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.692181 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b69834e-55cc-4ec2-b451-fafe1f417c53" path="/var/lib/kubelet/pods/9b69834e-55cc-4ec2-b451-fafe1f417c53/volumes"
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.696346 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4xhg\" (UniqueName: \"kubernetes.io/projected/6479af73-81ef-4755-89b5-3a2dd44e99b3-kube-api-access-m4xhg\") pod \"6479af73-81ef-4755-89b5-3a2dd44e99b3\" (UID: \"6479af73-81ef-4755-89b5-3a2dd44e99b3\") "
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.696451 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzhjq\" (UniqueName: \"kubernetes.io/projected/66c16915-30cc-4a4f-81ff-4b82cf152968-kube-api-access-lzhjq\") pod \"66c16915-30cc-4a4f-81ff-4b82cf152968\" (UID: \"66c16915-30cc-4a4f-81ff-4b82cf152968\") "
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.696488 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8547\" (UniqueName: \"kubernetes.io/projected/6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d-kube-api-access-n8547\") pod \"6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d\" (UID: \"6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d\") "
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.696571 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d-operator-scripts\") pod \"6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d\" (UID: \"6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d\") "
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.696693 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe-operator-scripts\") pod \"9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe\" (UID: \"9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe\") "
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.696782 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66c16915-30cc-4a4f-81ff-4b82cf152968-operator-scripts\") pod \"66c16915-30cc-4a4f-81ff-4b82cf152968\" (UID: \"66c16915-30cc-4a4f-81ff-4b82cf152968\") "
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.698499 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2mjv\" (UniqueName: \"kubernetes.io/projected/9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe-kube-api-access-s2mjv\") pod \"9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe\" (UID: \"9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe\") "
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.698591 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6479af73-81ef-4755-89b5-3a2dd44e99b3-operator-scripts\") pod \"6479af73-81ef-4755-89b5-3a2dd44e99b3\" (UID: \"6479af73-81ef-4755-89b5-3a2dd44e99b3\") "
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.699374 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d" (UID: "6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.699589 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6479af73-81ef-4755-89b5-3a2dd44e99b3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6479af73-81ef-4755-89b5-3a2dd44e99b3" (UID: "6479af73-81ef-4755-89b5-3a2dd44e99b3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.699963 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe" (UID: "9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.704814 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66c16915-30cc-4a4f-81ff-4b82cf152968-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "66c16915-30cc-4a4f-81ff-4b82cf152968" (UID: "66c16915-30cc-4a4f-81ff-4b82cf152968"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.708773 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mdvpb"
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.710234 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6479af73-81ef-4755-89b5-3a2dd44e99b3-kube-api-access-m4xhg" (OuterVolumeSpecName: "kube-api-access-m4xhg") pod "6479af73-81ef-4755-89b5-3a2dd44e99b3" (UID: "6479af73-81ef-4755-89b5-3a2dd44e99b3"). InnerVolumeSpecName "kube-api-access-m4xhg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.710647 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe-kube-api-access-s2mjv" (OuterVolumeSpecName: "kube-api-access-s2mjv") pod "9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe" (UID: "9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe"). InnerVolumeSpecName "kube-api-access-s2mjv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.711081 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d-kube-api-access-n8547" (OuterVolumeSpecName: "kube-api-access-n8547") pod "6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d" (UID: "6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d"). InnerVolumeSpecName "kube-api-access-n8547". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.712022 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-5ab6-account-create-update-4xrnn" Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.712344 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hc79\" (UniqueName: \"kubernetes.io/projected/b696cd6b-840b-4505-9010-114d223a90e9-kube-api-access-8hc79\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.712445 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4xhg\" (UniqueName: \"kubernetes.io/projected/6479af73-81ef-4755-89b5-3a2dd44e99b3-kube-api-access-m4xhg\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.712522 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m47b2\" (UniqueName: \"kubernetes.io/projected/aa302a57-5c6b-41b1-ac4b-7d9095b7b65a-kube-api-access-m47b2\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.712594 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8547\" (UniqueName: \"kubernetes.io/projected/6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d-kube-api-access-n8547\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.712699 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxbwc\" (UniqueName: \"kubernetes.io/projected/29921ec8-f68f-4547-a2c0-d4d3f5de6960-kube-api-access-pxbwc\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.712731 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.712748 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.712761 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b696cd6b-840b-4505-9010-114d223a90e9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.712790 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c996a30-f53d-49f1-a7d1-2ca23704b48e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.712803 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66c16915-30cc-4a4f-81ff-4b82cf152968-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.712820 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2mjv\" (UniqueName: \"kubernetes.io/projected/9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe-kube-api-access-s2mjv\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.712833 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n6pj\" (UniqueName: \"kubernetes.io/projected/7c996a30-f53d-49f1-a7d1-2ca23704b48e-kube-api-access-7n6pj\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.712846 4886 reconciler_common.go:293] "Volume detached for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29921ec8-f68f-4547-a2c0-d4d3f5de6960-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.712862 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6479af73-81ef-4755-89b5-3a2dd44e99b3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.718615 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66c16915-30cc-4a4f-81ff-4b82cf152968-kube-api-access-lzhjq" (OuterVolumeSpecName: "kube-api-access-lzhjq") pod "66c16915-30cc-4a4f-81ff-4b82cf152968" (UID: "66c16915-30cc-4a4f-81ff-4b82cf152968"). InnerVolumeSpecName "kube-api-access-lzhjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.719208 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-sgspp" Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.720794 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-fw887" Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.727097 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-00e3-account-create-update-5hhsj" Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.727840 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d860-account-create-update-5kd66" event={"ID":"66c16915-30cc-4a4f-81ff-4b82cf152968","Type":"ContainerDied","Data":"4b1a89009d472fe5b2dceb7b8a0b8294983468e34c2707bffbc7bce6c3368172"} Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.727895 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b1a89009d472fe5b2dceb7b8a0b8294983468e34c2707bffbc7bce6c3368172" Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.727918 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4vq4n" event={"ID":"6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d","Type":"ContainerDied","Data":"01b4206a66380781bc1d5bf890de4dd2a4c91be01985eaaaf4ae95a14ceba772"} Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.727930 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01b4206a66380781bc1d5bf890de4dd2a4c91be01985eaaaf4ae95a14ceba772" Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.727940 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mdvpb" event={"ID":"9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe","Type":"ContainerDied","Data":"b50b1c67e2972d88bd8981e1a3db87ee14511c02cd94a92c47a372ec32761177"} Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.727951 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b50b1c67e2972d88bd8981e1a3db87ee14511c02cd94a92c47a372ec32761177" Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.727960 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-5ab6-account-create-update-4xrnn" event={"ID":"7c996a30-f53d-49f1-a7d1-2ca23704b48e","Type":"ContainerDied","Data":"02ae7964e4db04590375f8dc8b2d4e000ef65dea8116644a045a8c2fec3c1786"} Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.727972 4886 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="02ae7964e4db04590375f8dc8b2d4e000ef65dea8116644a045a8c2fec3c1786" Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.727980 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-sgspp" event={"ID":"b696cd6b-840b-4505-9010-114d223a90e9","Type":"ContainerDied","Data":"1e72a81ebd6c0cbcca3631d9164e1b3194deb99d97abb1a18f67baa27d377916"} Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.727990 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e72a81ebd6c0cbcca3631d9164e1b3194deb99d97abb1a18f67baa27d377916" Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.727999 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-fw887" event={"ID":"6479af73-81ef-4755-89b5-3a2dd44e99b3","Type":"ContainerDied","Data":"467dace8916b0217ae148ecca1b8485085023c2a93c1b1258e47bf9de86c975f"} Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.728011 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="467dace8916b0217ae148ecca1b8485085023c2a93c1b1258e47bf9de86c975f" Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.728021 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-00e3-account-create-update-5hhsj" event={"ID":"aa302a57-5c6b-41b1-ac4b-7d9095b7b65a","Type":"ContainerDied","Data":"75581e1d16d26560497cc9988813329216f56a92bcacbc7cddb3b31eef34be95"} Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.728031 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75581e1d16d26560497cc9988813329216f56a92bcacbc7cddb3b31eef34be95" Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.731577 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f0b5-account-create-update-8b8vz" event={"ID":"29921ec8-f68f-4547-a2c0-d4d3f5de6960","Type":"ContainerDied","Data":"e3585e24c6e310ab66cc3acdb8b7196a729aef835b23a64db0aa1d39659b162c"} Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.731615 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3585e24c6e310ab66cc3acdb8b7196a729aef835b23a64db0aa1d39659b162c" Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.731689 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-f0b5-account-create-update-8b8vz" Jan 29 17:04:56 crc kubenswrapper[4886]: I0129 17:04:56.815370 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzhjq\" (UniqueName: \"kubernetes.io/projected/66c16915-30cc-4a4f-81ff-4b82cf152968-kube-api-access-lzhjq\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:57 crc kubenswrapper[4886]: I0129 17:04:57.463571 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 29 17:04:57 crc kubenswrapper[4886]: E0129 17:04:57.466805 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="ce7955a1-eb58-425a-872a-7ec102b8e090" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.725262 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-thqn5"] Jan 29 17:04:59 crc kubenswrapper[4886]: E0129 17:04:59.727494 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe" containerName="mariadb-database-create" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.727602 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe" containerName="mariadb-database-create" Jan 29 17:04:59 crc kubenswrapper[4886]: E0129 17:04:59.727677 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b696cd6b-840b-4505-9010-114d223a90e9" containerName="mariadb-database-create" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.727781 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="b696cd6b-840b-4505-9010-114d223a90e9" containerName="mariadb-database-create" Jan 29 17:04:59 crc kubenswrapper[4886]: E0129 17:04:59.727867 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b69834e-55cc-4ec2-b451-fafe1f417c53" containerName="mariadb-account-create-update" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.727938 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b69834e-55cc-4ec2-b451-fafe1f417c53" containerName="mariadb-account-create-update" Jan 29 17:04:59 crc kubenswrapper[4886]: E0129 17:04:59.728014 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6479af73-81ef-4755-89b5-3a2dd44e99b3" containerName="mariadb-database-create" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.729479 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="6479af73-81ef-4755-89b5-3a2dd44e99b3" containerName="mariadb-database-create" Jan 29 17:04:59 crc kubenswrapper[4886]: E0129 17:04:59.729582 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29921ec8-f68f-4547-a2c0-d4d3f5de6960" containerName="mariadb-account-create-update" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.729652 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="29921ec8-f68f-4547-a2c0-d4d3f5de6960" containerName="mariadb-account-create-update" Jan 29 17:04:59 crc kubenswrapper[4886]: E0129 17:04:59.729715 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66c16915-30cc-4a4f-81ff-4b82cf152968" containerName="mariadb-account-create-update" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.729763 4886 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="66c16915-30cc-4a4f-81ff-4b82cf152968" containerName="mariadb-account-create-update" Jan 29 17:04:59 crc kubenswrapper[4886]: E0129 17:04:59.729827 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c996a30-f53d-49f1-a7d1-2ca23704b48e" containerName="mariadb-account-create-update" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.729882 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c996a30-f53d-49f1-a7d1-2ca23704b48e" containerName="mariadb-account-create-update" Jan 29 17:04:59 crc kubenswrapper[4886]: E0129 17:04:59.729932 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa302a57-5c6b-41b1-ac4b-7d9095b7b65a" containerName="mariadb-account-create-update" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.729979 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa302a57-5c6b-41b1-ac4b-7d9095b7b65a" containerName="mariadb-account-create-update" Jan 29 17:04:59 crc kubenswrapper[4886]: E0129 17:04:59.730034 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d" containerName="mariadb-database-create" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.730086 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d" containerName="mariadb-database-create" Jan 29 17:04:59 crc kubenswrapper[4886]: E0129 17:04:59.730144 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e489f203-c94a-4bbb-b22a-750bec963d77" containerName="ovn-config" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.730192 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e489f203-c94a-4bbb-b22a-750bec963d77" containerName="ovn-config" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.730461 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="66c16915-30cc-4a4f-81ff-4b82cf152968" containerName="mariadb-account-create-update" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.730528 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa302a57-5c6b-41b1-ac4b-7d9095b7b65a" containerName="mariadb-account-create-update" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.730607 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="e489f203-c94a-4bbb-b22a-750bec963d77" containerName="ovn-config" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.730677 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d" containerName="mariadb-database-create" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.730743 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="b696cd6b-840b-4505-9010-114d223a90e9" containerName="mariadb-database-create" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.730828 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe" containerName="mariadb-database-create" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.731090 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="6479af73-81ef-4755-89b5-3a2dd44e99b3" containerName="mariadb-database-create" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.731164 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c996a30-f53d-49f1-a7d1-2ca23704b48e" containerName="mariadb-account-create-update" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.731239 4886 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="29921ec8-f68f-4547-a2c0-d4d3f5de6960" containerName="mariadb-account-create-update" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.731311 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b69834e-55cc-4ec2-b451-fafe1f417c53" containerName="mariadb-account-create-update" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.732194 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-thqn5" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.745087 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-thqn5"] Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.745801 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.746121 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-cpfdg" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.786666 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c7r8\" (UniqueName: \"kubernetes.io/projected/9f114908-5594-4378-939f-f54b2157d676-kube-api-access-6c7r8\") pod \"glance-db-sync-thqn5\" (UID: \"9f114908-5594-4378-939f-f54b2157d676\") " pod="openstack/glance-db-sync-thqn5" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.786813 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f114908-5594-4378-939f-f54b2157d676-combined-ca-bundle\") pod \"glance-db-sync-thqn5\" (UID: \"9f114908-5594-4378-939f-f54b2157d676\") " pod="openstack/glance-db-sync-thqn5" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.786913 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f114908-5594-4378-939f-f54b2157d676-config-data\") pod \"glance-db-sync-thqn5\" (UID: \"9f114908-5594-4378-939f-f54b2157d676\") " pod="openstack/glance-db-sync-thqn5" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.787018 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9f114908-5594-4378-939f-f54b2157d676-db-sync-config-data\") pod \"glance-db-sync-thqn5\" (UID: \"9f114908-5594-4378-939f-f54b2157d676\") " pod="openstack/glance-db-sync-thqn5" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.889167 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c7r8\" (UniqueName: \"kubernetes.io/projected/9f114908-5594-4378-939f-f54b2157d676-kube-api-access-6c7r8\") pod \"glance-db-sync-thqn5\" (UID: \"9f114908-5594-4378-939f-f54b2157d676\") " pod="openstack/glance-db-sync-thqn5" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.889258 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f114908-5594-4378-939f-f54b2157d676-combined-ca-bundle\") pod \"glance-db-sync-thqn5\" (UID: \"9f114908-5594-4378-939f-f54b2157d676\") " pod="openstack/glance-db-sync-thqn5" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.889388 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9f114908-5594-4378-939f-f54b2157d676-config-data\") pod \"glance-db-sync-thqn5\" (UID: \"9f114908-5594-4378-939f-f54b2157d676\") " pod="openstack/glance-db-sync-thqn5" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.889459 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9f114908-5594-4378-939f-f54b2157d676-db-sync-config-data\") pod \"glance-db-sync-thqn5\" (UID: \"9f114908-5594-4378-939f-f54b2157d676\") " pod="openstack/glance-db-sync-thqn5" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.894859 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9f114908-5594-4378-939f-f54b2157d676-db-sync-config-data\") pod \"glance-db-sync-thqn5\" (UID: \"9f114908-5594-4378-939f-f54b2157d676\") " pod="openstack/glance-db-sync-thqn5" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.895383 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f114908-5594-4378-939f-f54b2157d676-config-data\") pod \"glance-db-sync-thqn5\" (UID: \"9f114908-5594-4378-939f-f54b2157d676\") " pod="openstack/glance-db-sync-thqn5" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.903720 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f114908-5594-4378-939f-f54b2157d676-combined-ca-bundle\") pod \"glance-db-sync-thqn5\" (UID: \"9f114908-5594-4378-939f-f54b2157d676\") " pod="openstack/glance-db-sync-thqn5" Jan 29 17:04:59 crc kubenswrapper[4886]: I0129 17:04:59.915951 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c7r8\" (UniqueName: \"kubernetes.io/projected/9f114908-5594-4378-939f-f54b2157d676-kube-api-access-6c7r8\") pod \"glance-db-sync-thqn5\" (UID: \"9f114908-5594-4378-939f-f54b2157d676\") " pod="openstack/glance-db-sync-thqn5" Jan 29 17:05:00 crc kubenswrapper[4886]: I0129 17:05:00.049567 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-thqn5" Jan 29 17:05:00 crc kubenswrapper[4886]: I0129 17:05:00.471576 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-xg8wq"] Jan 29 17:05:00 crc kubenswrapper[4886]: I0129 17:05:00.474793 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xg8wq" Jan 29 17:05:00 crc kubenswrapper[4886]: I0129 17:05:00.477870 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 29 17:05:00 crc kubenswrapper[4886]: I0129 17:05:00.489378 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xg8wq"] Jan 29 17:05:00 crc kubenswrapper[4886]: I0129 17:05:00.608182 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40b94c98-0561-4135-a5af-023ef5f4ad67-operator-scripts\") pod \"root-account-create-update-xg8wq\" (UID: \"40b94c98-0561-4135-a5af-023ef5f4ad67\") " pod="openstack/root-account-create-update-xg8wq" Jan 29 17:05:00 crc kubenswrapper[4886]: I0129 17:05:00.608435 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbvff\" (UniqueName: \"kubernetes.io/projected/40b94c98-0561-4135-a5af-023ef5f4ad67-kube-api-access-hbvff\") pod \"root-account-create-update-xg8wq\" (UID: \"40b94c98-0561-4135-a5af-023ef5f4ad67\") " pod="openstack/root-account-create-update-xg8wq" Jan 29 17:05:00 crc kubenswrapper[4886]: I0129 17:05:00.710667 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbvff\" (UniqueName: \"kubernetes.io/projected/40b94c98-0561-4135-a5af-023ef5f4ad67-kube-api-access-hbvff\") pod \"root-account-create-update-xg8wq\" (UID: \"40b94c98-0561-4135-a5af-023ef5f4ad67\") " pod="openstack/root-account-create-update-xg8wq" Jan 29 17:05:00 crc kubenswrapper[4886]: I0129 17:05:00.710914 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40b94c98-0561-4135-a5af-023ef5f4ad67-operator-scripts\") pod \"root-account-create-update-xg8wq\" (UID: \"40b94c98-0561-4135-a5af-023ef5f4ad67\") " pod="openstack/root-account-create-update-xg8wq" Jan 29 17:05:00 crc kubenswrapper[4886]: I0129 17:05:00.711893 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40b94c98-0561-4135-a5af-023ef5f4ad67-operator-scripts\") pod \"root-account-create-update-xg8wq\" (UID: \"40b94c98-0561-4135-a5af-023ef5f4ad67\") " pod="openstack/root-account-create-update-xg8wq" Jan 29 17:05:00 crc kubenswrapper[4886]: I0129 17:05:00.736801 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbvff\" (UniqueName: \"kubernetes.io/projected/40b94c98-0561-4135-a5af-023ef5f4ad67-kube-api-access-hbvff\") pod \"root-account-create-update-xg8wq\" (UID: \"40b94c98-0561-4135-a5af-023ef5f4ad67\") " pod="openstack/root-account-create-update-xg8wq" Jan 29 17:05:00 crc kubenswrapper[4886]: I0129 17:05:00.739395 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-thqn5"] Jan 29 17:05:00 crc kubenswrapper[4886]: I0129 17:05:00.769021 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-thqn5" event={"ID":"9f114908-5594-4378-939f-f54b2157d676","Type":"ContainerStarted","Data":"fcc8bbf40553cde9c2b386443b55115feca44b41f5cbd715334aa7b1506eef78"} Jan 29 17:05:00 crc kubenswrapper[4886]: I0129 17:05:00.793525 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xg8wq" Jan 29 17:05:01 crc kubenswrapper[4886]: I0129 17:05:01.377401 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xg8wq"] Jan 29 17:05:01 crc kubenswrapper[4886]: W0129 17:05:01.378180 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40b94c98_0561_4135_a5af_023ef5f4ad67.slice/crio-7681989d41c3df63a9cfe16c457a7c04de933a5c485b9a6a131f7473a305fd74 WatchSource:0}: Error finding container 7681989d41c3df63a9cfe16c457a7c04de933a5c485b9a6a131f7473a305fd74: Status 404 returned error can't find the container with id 7681989d41c3df63a9cfe16c457a7c04de933a5c485b9a6a131f7473a305fd74 Jan 29 17:05:01 crc kubenswrapper[4886]: I0129 17:05:01.784411 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xg8wq" event={"ID":"40b94c98-0561-4135-a5af-023ef5f4ad67","Type":"ContainerStarted","Data":"2e89a5a701ca89a4fedcbc0c8d956d6d340377591f80cf75f3cdedc6fb2cd6f3"} Jan 29 17:05:01 crc kubenswrapper[4886]: I0129 17:05:01.784462 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xg8wq" event={"ID":"40b94c98-0561-4135-a5af-023ef5f4ad67","Type":"ContainerStarted","Data":"7681989d41c3df63a9cfe16c457a7c04de933a5c485b9a6a131f7473a305fd74"} Jan 29 17:05:01 crc kubenswrapper[4886]: I0129 17:05:01.788270 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-sl5h4"] Jan 29 17:05:01 crc kubenswrapper[4886]: I0129 17:05:01.790028 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-sl5h4" Jan 29 17:05:01 crc kubenswrapper[4886]: I0129 17:05:01.811679 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-sl5h4"] Jan 29 17:05:01 crc kubenswrapper[4886]: I0129 17:05:01.824436 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-xg8wq" podStartSLOduration=1.824408045 podStartE2EDuration="1.824408045s" podCreationTimestamp="2026-01-29 17:05:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:05:01.806108441 +0000 UTC m=+2584.714827713" watchObservedRunningTime="2026-01-29 17:05:01.824408045 +0000 UTC m=+2584.733127317" Jan 29 17:05:01 crc kubenswrapper[4886]: I0129 17:05:01.945837 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8a69a79-4e4c-4815-8cf5-0864ff2b8026-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-sl5h4\" (UID: \"d8a69a79-4e4c-4815-8cf5-0864ff2b8026\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-sl5h4" Jan 29 17:05:01 crc kubenswrapper[4886]: I0129 17:05:01.945928 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gng45\" (UniqueName: \"kubernetes.io/projected/d8a69a79-4e4c-4815-8cf5-0864ff2b8026-kube-api-access-gng45\") pod \"mysqld-exporter-openstack-cell1-db-create-sl5h4\" (UID: \"d8a69a79-4e4c-4815-8cf5-0864ff2b8026\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-sl5h4" Jan 29 17:05:01 crc kubenswrapper[4886]: I0129 17:05:01.995030 4886 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-23ad-account-create-update-2dsmj"] Jan 29 17:05:01 crc kubenswrapper[4886]: I0129 17:05:01.996614 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-23ad-account-create-update-2dsmj" Jan 29 17:05:01 crc kubenswrapper[4886]: I0129 17:05:01.999132 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Jan 29 17:05:02 crc kubenswrapper[4886]: I0129 17:05:02.005748 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-23ad-account-create-update-2dsmj"] Jan 29 17:05:02 crc kubenswrapper[4886]: I0129 17:05:02.047905 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8a69a79-4e4c-4815-8cf5-0864ff2b8026-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-sl5h4\" (UID: \"d8a69a79-4e4c-4815-8cf5-0864ff2b8026\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-sl5h4" Jan 29 17:05:02 crc kubenswrapper[4886]: I0129 17:05:02.047974 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gng45\" (UniqueName: \"kubernetes.io/projected/d8a69a79-4e4c-4815-8cf5-0864ff2b8026-kube-api-access-gng45\") pod \"mysqld-exporter-openstack-cell1-db-create-sl5h4\" (UID: \"d8a69a79-4e4c-4815-8cf5-0864ff2b8026\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-sl5h4" Jan 29 17:05:02 crc kubenswrapper[4886]: I0129 17:05:02.051652 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8a69a79-4e4c-4815-8cf5-0864ff2b8026-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-sl5h4\" (UID: \"d8a69a79-4e4c-4815-8cf5-0864ff2b8026\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-sl5h4" Jan 29 17:05:02 crc kubenswrapper[4886]: I0129 17:05:02.067019 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gng45\" (UniqueName: \"kubernetes.io/projected/d8a69a79-4e4c-4815-8cf5-0864ff2b8026-kube-api-access-gng45\") pod \"mysqld-exporter-openstack-cell1-db-create-sl5h4\" (UID: \"d8a69a79-4e4c-4815-8cf5-0864ff2b8026\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-sl5h4" Jan 29 17:05:02 crc kubenswrapper[4886]: I0129 17:05:02.109233 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-sl5h4" Jan 29 17:05:02 crc kubenswrapper[4886]: I0129 17:05:02.151097 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2ed1f90-1318-483e-901c-bff80e1e94b6-operator-scripts\") pod \"mysqld-exporter-23ad-account-create-update-2dsmj\" (UID: \"d2ed1f90-1318-483e-901c-bff80e1e94b6\") " pod="openstack/mysqld-exporter-23ad-account-create-update-2dsmj" Jan 29 17:05:02 crc kubenswrapper[4886]: I0129 17:05:02.151437 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b24b5\" (UniqueName: \"kubernetes.io/projected/d2ed1f90-1318-483e-901c-bff80e1e94b6-kube-api-access-b24b5\") pod \"mysqld-exporter-23ad-account-create-update-2dsmj\" (UID: \"d2ed1f90-1318-483e-901c-bff80e1e94b6\") " pod="openstack/mysqld-exporter-23ad-account-create-update-2dsmj" Jan 29 17:05:02 crc kubenswrapper[4886]: I0129 17:05:02.253422 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b24b5\" (UniqueName: \"kubernetes.io/projected/d2ed1f90-1318-483e-901c-bff80e1e94b6-kube-api-access-b24b5\") pod \"mysqld-exporter-23ad-account-create-update-2dsmj\" (UID: \"d2ed1f90-1318-483e-901c-bff80e1e94b6\") " pod="openstack/mysqld-exporter-23ad-account-create-update-2dsmj" Jan 29 17:05:02 crc kubenswrapper[4886]: I0129 17:05:02.253894 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2ed1f90-1318-483e-901c-bff80e1e94b6-operator-scripts\") pod \"mysqld-exporter-23ad-account-create-update-2dsmj\" (UID: \"d2ed1f90-1318-483e-901c-bff80e1e94b6\") " pod="openstack/mysqld-exporter-23ad-account-create-update-2dsmj" Jan 29 17:05:02 crc kubenswrapper[4886]: I0129 17:05:02.254711 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2ed1f90-1318-483e-901c-bff80e1e94b6-operator-scripts\") pod \"mysqld-exporter-23ad-account-create-update-2dsmj\" (UID: \"d2ed1f90-1318-483e-901c-bff80e1e94b6\") " pod="openstack/mysqld-exporter-23ad-account-create-update-2dsmj" Jan 29 17:05:02 crc kubenswrapper[4886]: I0129 17:05:02.277376 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b24b5\" (UniqueName: \"kubernetes.io/projected/d2ed1f90-1318-483e-901c-bff80e1e94b6-kube-api-access-b24b5\") pod \"mysqld-exporter-23ad-account-create-update-2dsmj\" (UID: \"d2ed1f90-1318-483e-901c-bff80e1e94b6\") " pod="openstack/mysqld-exporter-23ad-account-create-update-2dsmj" Jan 29 17:05:02 crc kubenswrapper[4886]: I0129 17:05:02.316253 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-23ad-account-create-update-2dsmj" Jan 29 17:05:02 crc kubenswrapper[4886]: I0129 17:05:02.465318 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:02 crc kubenswrapper[4886]: I0129 17:05:02.473164 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:02 crc kubenswrapper[4886]: I0129 17:05:02.637747 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-sl5h4"] Jan 29 17:05:02 crc kubenswrapper[4886]: W0129 17:05:02.654382 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8a69a79_4e4c_4815_8cf5_0864ff2b8026.slice/crio-27e604caf8b15942348375a9990e1bf7c1fa6aa35968cf73fc93abd1ac9c4cad WatchSource:0}: Error finding container 27e604caf8b15942348375a9990e1bf7c1fa6aa35968cf73fc93abd1ac9c4cad: Status 404 returned error can't find the container with id 27e604caf8b15942348375a9990e1bf7c1fa6aa35968cf73fc93abd1ac9c4cad Jan 29 17:05:02 crc kubenswrapper[4886]: I0129 17:05:02.829206 4886 generic.go:334] "Generic (PLEG): container finished" podID="40b94c98-0561-4135-a5af-023ef5f4ad67" containerID="2e89a5a701ca89a4fedcbc0c8d956d6d340377591f80cf75f3cdedc6fb2cd6f3" exitCode=0 Jan 29 17:05:02 crc kubenswrapper[4886]: I0129 17:05:02.829709 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xg8wq" event={"ID":"40b94c98-0561-4135-a5af-023ef5f4ad67","Type":"ContainerDied","Data":"2e89a5a701ca89a4fedcbc0c8d956d6d340377591f80cf75f3cdedc6fb2cd6f3"} Jan 29 17:05:02 crc kubenswrapper[4886]: I0129 17:05:02.837641 4886 generic.go:334] "Generic (PLEG): container finished" podID="ebccb3a0-d421-4c30-9201-43e9106e4006" containerID="b9499d28202d4957e50821e930ae2c95870e6ae3730a64237a2f9f54f953765c" exitCode=0 Jan 29 17:05:02 crc kubenswrapper[4886]: I0129 17:05:02.837755 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-s7294" event={"ID":"ebccb3a0-d421-4c30-9201-43e9106e4006","Type":"ContainerDied","Data":"b9499d28202d4957e50821e930ae2c95870e6ae3730a64237a2f9f54f953765c"} Jan 29 17:05:02 crc kubenswrapper[4886]: I0129 17:05:02.839833 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-sl5h4" event={"ID":"d8a69a79-4e4c-4815-8cf5-0864ff2b8026","Type":"ContainerStarted","Data":"27e604caf8b15942348375a9990e1bf7c1fa6aa35968cf73fc93abd1ac9c4cad"} Jan 29 17:05:02 crc kubenswrapper[4886]: I0129 17:05:02.932582 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-23ad-account-create-update-2dsmj"] Jan 29 17:05:03 crc kubenswrapper[4886]: I0129 17:05:03.851724 4886 generic.go:334] "Generic (PLEG): container finished" podID="d8a69a79-4e4c-4815-8cf5-0864ff2b8026" containerID="ef7ef7e1c633f815512fbc83adaa9bb46d23ddf73eb8c93c02d1c3c3b64a5fcf" exitCode=0 Jan 29 17:05:03 crc kubenswrapper[4886]: I0129 17:05:03.851772 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-sl5h4" event={"ID":"d8a69a79-4e4c-4815-8cf5-0864ff2b8026","Type":"ContainerDied","Data":"ef7ef7e1c633f815512fbc83adaa9bb46d23ddf73eb8c93c02d1c3c3b64a5fcf"} Jan 29 17:05:03 crc kubenswrapper[4886]: I0129 17:05:03.857149 4886 generic.go:334] "Generic (PLEG): container finished" 
podID="d2ed1f90-1318-483e-901c-bff80e1e94b6" containerID="d34996a936f771ac75eec769fb4795e0b3637c5867ba052c3b34c2c7b2aee667" exitCode=0 Jan 29 17:05:03 crc kubenswrapper[4886]: I0129 17:05:03.857243 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-23ad-account-create-update-2dsmj" event={"ID":"d2ed1f90-1318-483e-901c-bff80e1e94b6","Type":"ContainerDied","Data":"d34996a936f771ac75eec769fb4795e0b3637c5867ba052c3b34c2c7b2aee667"} Jan 29 17:05:03 crc kubenswrapper[4886]: I0129 17:05:03.857297 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-23ad-account-create-update-2dsmj" event={"ID":"d2ed1f90-1318-483e-901c-bff80e1e94b6","Type":"ContainerStarted","Data":"563f68afde711b3cca93a3f5d5dbae0e6aee5931cf6f7c5cb99463997cce21b1"} Jan 29 17:05:03 crc kubenswrapper[4886]: I0129 17:05:03.861908 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ce7955a1-eb58-425a-872a-7ec102b8e090","Type":"ContainerStarted","Data":"29b6600206cc1bb7f3f16719ec90e5544c72d2eaf5a596eaa0dcf19be615c898"} Jan 29 17:05:03 crc kubenswrapper[4886]: I0129 17:05:03.862882 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:03 crc kubenswrapper[4886]: I0129 17:05:03.921583 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=63.471724184 podStartE2EDuration="2m32.921560895s" podCreationTimestamp="2026-01-29 17:02:31 +0000 UTC" firstStartedPulling="2026-01-29 17:03:34.193704497 +0000 UTC m=+2497.102423769" lastFinishedPulling="2026-01-29 17:05:03.643541208 +0000 UTC m=+2586.552260480" observedRunningTime="2026-01-29 17:05:03.920964979 +0000 UTC m=+2586.829684251" watchObservedRunningTime="2026-01-29 17:05:03.921560895 +0000 UTC m=+2586.830280177" Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.405236 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-s7294" Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.524967 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ebccb3a0-d421-4c30-9201-43e9106e4006-dispersionconf\") pod \"ebccb3a0-d421-4c30-9201-43e9106e4006\" (UID: \"ebccb3a0-d421-4c30-9201-43e9106e4006\") " Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.525008 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ebccb3a0-d421-4c30-9201-43e9106e4006-swiftconf\") pod \"ebccb3a0-d421-4c30-9201-43e9106e4006\" (UID: \"ebccb3a0-d421-4c30-9201-43e9106e4006\") " Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.525050 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km9gr\" (UniqueName: \"kubernetes.io/projected/ebccb3a0-d421-4c30-9201-43e9106e4006-kube-api-access-km9gr\") pod \"ebccb3a0-d421-4c30-9201-43e9106e4006\" (UID: \"ebccb3a0-d421-4c30-9201-43e9106e4006\") " Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.525097 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ebccb3a0-d421-4c30-9201-43e9106e4006-ring-data-devices\") pod \"ebccb3a0-d421-4c30-9201-43e9106e4006\" (UID: \"ebccb3a0-d421-4c30-9201-43e9106e4006\") " Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.525637 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebccb3a0-d421-4c30-9201-43e9106e4006-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "ebccb3a0-d421-4c30-9201-43e9106e4006" (UID: "ebccb3a0-d421-4c30-9201-43e9106e4006"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.526309 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebccb3a0-d421-4c30-9201-43e9106e4006-scripts\") pod \"ebccb3a0-d421-4c30-9201-43e9106e4006\" (UID: \"ebccb3a0-d421-4c30-9201-43e9106e4006\") " Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.526358 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebccb3a0-d421-4c30-9201-43e9106e4006-combined-ca-bundle\") pod \"ebccb3a0-d421-4c30-9201-43e9106e4006\" (UID: \"ebccb3a0-d421-4c30-9201-43e9106e4006\") " Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.526431 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ebccb3a0-d421-4c30-9201-43e9106e4006-etc-swift\") pod \"ebccb3a0-d421-4c30-9201-43e9106e4006\" (UID: \"ebccb3a0-d421-4c30-9201-43e9106e4006\") " Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.527108 4886 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ebccb3a0-d421-4c30-9201-43e9106e4006-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.528030 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebccb3a0-d421-4c30-9201-43e9106e4006-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "ebccb3a0-d421-4c30-9201-43e9106e4006" (UID: "ebccb3a0-d421-4c30-9201-43e9106e4006"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.533746 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebccb3a0-d421-4c30-9201-43e9106e4006-kube-api-access-km9gr" (OuterVolumeSpecName: "kube-api-access-km9gr") pod "ebccb3a0-d421-4c30-9201-43e9106e4006" (UID: "ebccb3a0-d421-4c30-9201-43e9106e4006"). InnerVolumeSpecName "kube-api-access-km9gr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.536847 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebccb3a0-d421-4c30-9201-43e9106e4006-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "ebccb3a0-d421-4c30-9201-43e9106e4006" (UID: "ebccb3a0-d421-4c30-9201-43e9106e4006"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.552243 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebccb3a0-d421-4c30-9201-43e9106e4006-scripts" (OuterVolumeSpecName: "scripts") pod "ebccb3a0-d421-4c30-9201-43e9106e4006" (UID: "ebccb3a0-d421-4c30-9201-43e9106e4006"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.563502 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebccb3a0-d421-4c30-9201-43e9106e4006-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "ebccb3a0-d421-4c30-9201-43e9106e4006" (UID: "ebccb3a0-d421-4c30-9201-43e9106e4006"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.575454 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebccb3a0-d421-4c30-9201-43e9106e4006-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ebccb3a0-d421-4c30-9201-43e9106e4006" (UID: "ebccb3a0-d421-4c30-9201-43e9106e4006"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.593849 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xg8wq" Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.628942 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebccb3a0-d421-4c30-9201-43e9106e4006-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.628981 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebccb3a0-d421-4c30-9201-43e9106e4006-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.629030 4886 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ebccb3a0-d421-4c30-9201-43e9106e4006-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.629046 4886 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ebccb3a0-d421-4c30-9201-43e9106e4006-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.629058 4886 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ebccb3a0-d421-4c30-9201-43e9106e4006-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.629071 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km9gr\" (UniqueName: \"kubernetes.io/projected/ebccb3a0-d421-4c30-9201-43e9106e4006-kube-api-access-km9gr\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.730800 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbvff\" (UniqueName: \"kubernetes.io/projected/40b94c98-0561-4135-a5af-023ef5f4ad67-kube-api-access-hbvff\") pod \"40b94c98-0561-4135-a5af-023ef5f4ad67\" (UID: \"40b94c98-0561-4135-a5af-023ef5f4ad67\") " Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.730954 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40b94c98-0561-4135-a5af-023ef5f4ad67-operator-scripts\") pod \"40b94c98-0561-4135-a5af-023ef5f4ad67\" (UID: \"40b94c98-0561-4135-a5af-023ef5f4ad67\") " Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.733151 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40b94c98-0561-4135-a5af-023ef5f4ad67-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "40b94c98-0561-4135-a5af-023ef5f4ad67" (UID: "40b94c98-0561-4135-a5af-023ef5f4ad67"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.735200 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40b94c98-0561-4135-a5af-023ef5f4ad67-kube-api-access-hbvff" (OuterVolumeSpecName: "kube-api-access-hbvff") pod "40b94c98-0561-4135-a5af-023ef5f4ad67" (UID: "40b94c98-0561-4135-a5af-023ef5f4ad67"). InnerVolumeSpecName "kube-api-access-hbvff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.834152 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbvff\" (UniqueName: \"kubernetes.io/projected/40b94c98-0561-4135-a5af-023ef5f4ad67-kube-api-access-hbvff\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.834198 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40b94c98-0561-4135-a5af-023ef5f4ad67-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.876828 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xg8wq" event={"ID":"40b94c98-0561-4135-a5af-023ef5f4ad67","Type":"ContainerDied","Data":"7681989d41c3df63a9cfe16c457a7c04de933a5c485b9a6a131f7473a305fd74"} Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.876867 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7681989d41c3df63a9cfe16c457a7c04de933a5c485b9a6a131f7473a305fd74" Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.876918 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xg8wq" Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.884693 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-s7294" event={"ID":"ebccb3a0-d421-4c30-9201-43e9106e4006","Type":"ContainerDied","Data":"b1f9445ba0ed2622eaf729acf0f6efe1278fbfe9cc96bab1babb0686d7460824"} Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.884749 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1f9445ba0ed2622eaf729acf0f6efe1278fbfe9cc96bab1babb0686d7460824" Jan 29 17:05:04 crc kubenswrapper[4886]: I0129 17:05:04.884934 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-s7294" Jan 29 17:05:05 crc kubenswrapper[4886]: I0129 17:05:05.248563 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-sl5h4" Jan 29 17:05:05 crc kubenswrapper[4886]: I0129 17:05:05.297597 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="2b0be43b-8956-45aa-ad50-de9183b3fea3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.144:5671: connect: connection refused" Jan 29 17:05:05 crc kubenswrapper[4886]: I0129 17:05:05.356789 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8a69a79-4e4c-4815-8cf5-0864ff2b8026-operator-scripts\") pod \"d8a69a79-4e4c-4815-8cf5-0864ff2b8026\" (UID: \"d8a69a79-4e4c-4815-8cf5-0864ff2b8026\") " Jan 29 17:05:05 crc kubenswrapper[4886]: I0129 17:05:05.356998 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gng45\" (UniqueName: \"kubernetes.io/projected/d8a69a79-4e4c-4815-8cf5-0864ff2b8026-kube-api-access-gng45\") pod \"d8a69a79-4e4c-4815-8cf5-0864ff2b8026\" (UID: \"d8a69a79-4e4c-4815-8cf5-0864ff2b8026\") " Jan 29 17:05:05 crc kubenswrapper[4886]: I0129 17:05:05.367226 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8a69a79-4e4c-4815-8cf5-0864ff2b8026-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d8a69a79-4e4c-4815-8cf5-0864ff2b8026" (UID: "d8a69a79-4e4c-4815-8cf5-0864ff2b8026"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:05 crc kubenswrapper[4886]: I0129 17:05:05.369268 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8a69a79-4e4c-4815-8cf5-0864ff2b8026-kube-api-access-gng45" (OuterVolumeSpecName: "kube-api-access-gng45") pod "d8a69a79-4e4c-4815-8cf5-0864ff2b8026" (UID: "d8a69a79-4e4c-4815-8cf5-0864ff2b8026"). InnerVolumeSpecName "kube-api-access-gng45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:05 crc kubenswrapper[4886]: I0129 17:05:05.432846 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-23ad-account-create-update-2dsmj" Jan 29 17:05:05 crc kubenswrapper[4886]: I0129 17:05:05.444409 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.146:5671: connect: connection refused" Jan 29 17:05:05 crc kubenswrapper[4886]: I0129 17:05:05.470077 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8a69a79-4e4c-4815-8cf5-0864ff2b8026-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:05 crc kubenswrapper[4886]: I0129 17:05:05.470145 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gng45\" (UniqueName: \"kubernetes.io/projected/d8a69a79-4e4c-4815-8cf5-0864ff2b8026-kube-api-access-gng45\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:05 crc kubenswrapper[4886]: I0129 17:05:05.571184 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b24b5\" (UniqueName: \"kubernetes.io/projected/d2ed1f90-1318-483e-901c-bff80e1e94b6-kube-api-access-b24b5\") pod \"d2ed1f90-1318-483e-901c-bff80e1e94b6\" (UID: \"d2ed1f90-1318-483e-901c-bff80e1e94b6\") " Jan 29 17:05:05 crc kubenswrapper[4886]: I0129 17:05:05.571473 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2ed1f90-1318-483e-901c-bff80e1e94b6-operator-scripts\") pod \"d2ed1f90-1318-483e-901c-bff80e1e94b6\" (UID: \"d2ed1f90-1318-483e-901c-bff80e1e94b6\") " Jan 29 17:05:05 crc kubenswrapper[4886]: I0129 17:05:05.571898 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2ed1f90-1318-483e-901c-bff80e1e94b6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d2ed1f90-1318-483e-901c-bff80e1e94b6" (UID: "d2ed1f90-1318-483e-901c-bff80e1e94b6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:05 crc kubenswrapper[4886]: I0129 17:05:05.576852 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2ed1f90-1318-483e-901c-bff80e1e94b6-kube-api-access-b24b5" (OuterVolumeSpecName: "kube-api-access-b24b5") pod "d2ed1f90-1318-483e-901c-bff80e1e94b6" (UID: "d2ed1f90-1318-483e-901c-bff80e1e94b6"). InnerVolumeSpecName "kube-api-access-b24b5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:05 crc kubenswrapper[4886]: I0129 17:05:05.637421 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="842bfe4d-04ba-4143-9076-3033163c7b82" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.145:5671: connect: connection refused" Jan 29 17:05:05 crc kubenswrapper[4886]: I0129 17:05:05.674316 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2ed1f90-1318-483e-901c-bff80e1e94b6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:05 crc kubenswrapper[4886]: I0129 17:05:05.674366 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b24b5\" (UniqueName: \"kubernetes.io/projected/d2ed1f90-1318-483e-901c-bff80e1e94b6-kube-api-access-b24b5\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:05 crc kubenswrapper[4886]: I0129 17:05:05.896263 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-sl5h4" event={"ID":"d8a69a79-4e4c-4815-8cf5-0864ff2b8026","Type":"ContainerDied","Data":"27e604caf8b15942348375a9990e1bf7c1fa6aa35968cf73fc93abd1ac9c4cad"} Jan 29 17:05:05 crc kubenswrapper[4886]: I0129 17:05:05.896349 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27e604caf8b15942348375a9990e1bf7c1fa6aa35968cf73fc93abd1ac9c4cad" Jan 29 17:05:05 crc kubenswrapper[4886]: I0129 17:05:05.896367 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-sl5h4" Jan 29 17:05:05 crc kubenswrapper[4886]: I0129 17:05:05.900987 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-23ad-account-create-update-2dsmj" Jan 29 17:05:05 crc kubenswrapper[4886]: I0129 17:05:05.902731 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-23ad-account-create-update-2dsmj" event={"ID":"d2ed1f90-1318-483e-901c-bff80e1e94b6","Type":"ContainerDied","Data":"563f68afde711b3cca93a3f5d5dbae0e6aee5931cf6f7c5cb99463997cce21b1"} Jan 29 17:05:05 crc kubenswrapper[4886]: I0129 17:05:05.902855 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="563f68afde711b3cca93a3f5d5dbae0e6aee5931cf6f7c5cb99463997cce21b1" Jan 29 17:05:05 crc kubenswrapper[4886]: I0129 17:05:05.970773 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:05:06 crc kubenswrapper[4886]: I0129 17:05:06.395519 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 29 17:05:06 crc kubenswrapper[4886]: I0129 17:05:06.910209 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="ce7955a1-eb58-425a-872a-7ec102b8e090" containerName="config-reloader" containerID="cri-o://36870feb46aff15218a1df0a6e9d4aa854998ebadaa74a5a50b3e39905ffbc8c" gracePeriod=600 Jan 29 17:05:06 crc kubenswrapper[4886]: I0129 17:05:06.910384 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="ce7955a1-eb58-425a-872a-7ec102b8e090" containerName="prometheus" containerID="cri-o://3a9c53d5227fb7b0c6bf2e7197762b1a4d147cab6dde0f951e7924a558b5e58d" gracePeriod=600 Jan 29 17:05:06 crc kubenswrapper[4886]: I0129 17:05:06.910385 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="ce7955a1-eb58-425a-872a-7ec102b8e090" containerName="thanos-sidecar" containerID="cri-o://29b6600206cc1bb7f3f16719ec90e5544c72d2eaf5a596eaa0dcf19be615c898" gracePeriod=600 Jan 29 17:05:07 crc kubenswrapper[4886]: I0129 17:05:07.304960 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Jan 29 17:05:07 crc kubenswrapper[4886]: E0129 17:05:07.305483 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebccb3a0-d421-4c30-9201-43e9106e4006" containerName="swift-ring-rebalance" Jan 29 17:05:07 crc kubenswrapper[4886]: I0129 17:05:07.305503 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebccb3a0-d421-4c30-9201-43e9106e4006" containerName="swift-ring-rebalance" Jan 29 17:05:07 crc kubenswrapper[4886]: E0129 17:05:07.305519 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8a69a79-4e4c-4815-8cf5-0864ff2b8026" containerName="mariadb-database-create" Jan 29 17:05:07 crc kubenswrapper[4886]: I0129 17:05:07.305525 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8a69a79-4e4c-4815-8cf5-0864ff2b8026" containerName="mariadb-database-create" Jan 29 17:05:07 crc kubenswrapper[4886]: E0129 17:05:07.305547 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2ed1f90-1318-483e-901c-bff80e1e94b6" containerName="mariadb-account-create-update" Jan 29 17:05:07 crc kubenswrapper[4886]: I0129 17:05:07.305555 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2ed1f90-1318-483e-901c-bff80e1e94b6" containerName="mariadb-account-create-update" Jan 29 17:05:07 crc kubenswrapper[4886]: E0129 17:05:07.305576 4886 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="40b94c98-0561-4135-a5af-023ef5f4ad67" containerName="mariadb-account-create-update" Jan 29 17:05:07 crc kubenswrapper[4886]: I0129 17:05:07.305582 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="40b94c98-0561-4135-a5af-023ef5f4ad67" containerName="mariadb-account-create-update" Jan 29 17:05:07 crc kubenswrapper[4886]: I0129 17:05:07.305817 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebccb3a0-d421-4c30-9201-43e9106e4006" containerName="swift-ring-rebalance" Jan 29 17:05:07 crc kubenswrapper[4886]: I0129 17:05:07.305838 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="40b94c98-0561-4135-a5af-023ef5f4ad67" containerName="mariadb-account-create-update" Jan 29 17:05:07 crc kubenswrapper[4886]: I0129 17:05:07.305853 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8a69a79-4e4c-4815-8cf5-0864ff2b8026" containerName="mariadb-database-create" Jan 29 17:05:07 crc kubenswrapper[4886]: I0129 17:05:07.305871 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2ed1f90-1318-483e-901c-bff80e1e94b6" containerName="mariadb-account-create-update" Jan 29 17:05:07 crc kubenswrapper[4886]: I0129 17:05:07.306744 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 29 17:05:07 crc kubenswrapper[4886]: I0129 17:05:07.309936 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Jan 29 17:05:07 crc kubenswrapper[4886]: I0129 17:05:07.317779 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 29 17:05:07 crc kubenswrapper[4886]: I0129 17:05:07.415727 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v52w\" (UniqueName: \"kubernetes.io/projected/f0d54f6d-4531-4707-8c1a-aed5e0e36d0e-kube-api-access-5v52w\") pod \"mysqld-exporter-0\" (UID: \"f0d54f6d-4531-4707-8c1a-aed5e0e36d0e\") " pod="openstack/mysqld-exporter-0" Jan 29 17:05:07 crc kubenswrapper[4886]: I0129 17:05:07.416261 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d54f6d-4531-4707-8c1a-aed5e0e36d0e-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"f0d54f6d-4531-4707-8c1a-aed5e0e36d0e\") " pod="openstack/mysqld-exporter-0" Jan 29 17:05:07 crc kubenswrapper[4886]: I0129 17:05:07.416535 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0d54f6d-4531-4707-8c1a-aed5e0e36d0e-config-data\") pod \"mysqld-exporter-0\" (UID: \"f0d54f6d-4531-4707-8c1a-aed5e0e36d0e\") " pod="openstack/mysqld-exporter-0" Jan 29 17:05:07 crc kubenswrapper[4886]: I0129 17:05:07.463530 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="ce7955a1-eb58-425a-872a-7ec102b8e090" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.154:9090/-/ready\": dial tcp 10.217.0.154:9090: connect: connection refused" Jan 29 17:05:07 crc kubenswrapper[4886]: I0129 17:05:07.518805 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v52w\" (UniqueName: \"kubernetes.io/projected/f0d54f6d-4531-4707-8c1a-aed5e0e36d0e-kube-api-access-5v52w\") pod \"mysqld-exporter-0\" (UID: 
\"f0d54f6d-4531-4707-8c1a-aed5e0e36d0e\") " pod="openstack/mysqld-exporter-0" Jan 29 17:05:07 crc kubenswrapper[4886]: I0129 17:05:07.518936 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d54f6d-4531-4707-8c1a-aed5e0e36d0e-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"f0d54f6d-4531-4707-8c1a-aed5e0e36d0e\") " pod="openstack/mysqld-exporter-0" Jan 29 17:05:07 crc kubenswrapper[4886]: I0129 17:05:07.519023 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0d54f6d-4531-4707-8c1a-aed5e0e36d0e-config-data\") pod \"mysqld-exporter-0\" (UID: \"f0d54f6d-4531-4707-8c1a-aed5e0e36d0e\") " pod="openstack/mysqld-exporter-0" Jan 29 17:05:07 crc kubenswrapper[4886]: I0129 17:05:07.526493 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0d54f6d-4531-4707-8c1a-aed5e0e36d0e-config-data\") pod \"mysqld-exporter-0\" (UID: \"f0d54f6d-4531-4707-8c1a-aed5e0e36d0e\") " pod="openstack/mysqld-exporter-0" Jan 29 17:05:07 crc kubenswrapper[4886]: I0129 17:05:07.526511 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d54f6d-4531-4707-8c1a-aed5e0e36d0e-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"f0d54f6d-4531-4707-8c1a-aed5e0e36d0e\") " pod="openstack/mysqld-exporter-0" Jan 29 17:05:07 crc kubenswrapper[4886]: I0129 17:05:07.537675 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v52w\" (UniqueName: \"kubernetes.io/projected/f0d54f6d-4531-4707-8c1a-aed5e0e36d0e-kube-api-access-5v52w\") pod \"mysqld-exporter-0\" (UID: \"f0d54f6d-4531-4707-8c1a-aed5e0e36d0e\") " pod="openstack/mysqld-exporter-0" Jan 29 17:05:07 crc kubenswrapper[4886]: I0129 17:05:07.615850 4886 scope.go:117] "RemoveContainer" containerID="1ef597c576c05004c5148470ade7ddd51ab3cad8d942f918ff09afb054559dfc" Jan 29 17:05:07 crc kubenswrapper[4886]: E0129 17:05:07.616238 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:05:07 crc kubenswrapper[4886]: I0129 17:05:07.672203 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 29 17:05:07 crc kubenswrapper[4886]: I0129 17:05:07.926412 4886 generic.go:334] "Generic (PLEG): container finished" podID="ce7955a1-eb58-425a-872a-7ec102b8e090" containerID="29b6600206cc1bb7f3f16719ec90e5544c72d2eaf5a596eaa0dcf19be615c898" exitCode=0 Jan 29 17:05:07 crc kubenswrapper[4886]: I0129 17:05:07.926837 4886 generic.go:334] "Generic (PLEG): container finished" podID="ce7955a1-eb58-425a-872a-7ec102b8e090" containerID="3a9c53d5227fb7b0c6bf2e7197762b1a4d147cab6dde0f951e7924a558b5e58d" exitCode=0 Jan 29 17:05:07 crc kubenswrapper[4886]: I0129 17:05:07.926848 4886 generic.go:334] "Generic (PLEG): container finished" podID="ce7955a1-eb58-425a-872a-7ec102b8e090" containerID="36870feb46aff15218a1df0a6e9d4aa854998ebadaa74a5a50b3e39905ffbc8c" exitCode=0 Jan 29 17:05:07 crc kubenswrapper[4886]: I0129 17:05:07.926872 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ce7955a1-eb58-425a-872a-7ec102b8e090","Type":"ContainerDied","Data":"29b6600206cc1bb7f3f16719ec90e5544c72d2eaf5a596eaa0dcf19be615c898"} Jan 29 17:05:07 crc kubenswrapper[4886]: I0129 17:05:07.926921 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ce7955a1-eb58-425a-872a-7ec102b8e090","Type":"ContainerDied","Data":"3a9c53d5227fb7b0c6bf2e7197762b1a4d147cab6dde0f951e7924a558b5e58d"} Jan 29 17:05:07 crc kubenswrapper[4886]: I0129 17:05:07.926938 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ce7955a1-eb58-425a-872a-7ec102b8e090","Type":"ContainerDied","Data":"36870feb46aff15218a1df0a6e9d4aa854998ebadaa74a5a50b3e39905ffbc8c"} Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.190239 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.662076 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.750739 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68e86941-9560-4703-a0e6-50bee25f62a0\") pod \"ce7955a1-eb58-425a-872a-7ec102b8e090\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.750801 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ce7955a1-eb58-425a-872a-7ec102b8e090-config-out\") pod \"ce7955a1-eb58-425a-872a-7ec102b8e090\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.750871 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ce7955a1-eb58-425a-872a-7ec102b8e090-prometheus-metric-storage-rulefiles-2\") pod \"ce7955a1-eb58-425a-872a-7ec102b8e090\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.750904 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ce7955a1-eb58-425a-872a-7ec102b8e090-prometheus-metric-storage-rulefiles-0\") pod \"ce7955a1-eb58-425a-872a-7ec102b8e090\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.750956 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ce7955a1-eb58-425a-872a-7ec102b8e090-config\") pod \"ce7955a1-eb58-425a-872a-7ec102b8e090\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.750985 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ce7955a1-eb58-425a-872a-7ec102b8e090-tls-assets\") pod \"ce7955a1-eb58-425a-872a-7ec102b8e090\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.751000 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ce7955a1-eb58-425a-872a-7ec102b8e090-prometheus-metric-storage-rulefiles-1\") pod \"ce7955a1-eb58-425a-872a-7ec102b8e090\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.751039 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ce7955a1-eb58-425a-872a-7ec102b8e090-web-config\") pod \"ce7955a1-eb58-425a-872a-7ec102b8e090\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.751189 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2cnt\" (UniqueName: \"kubernetes.io/projected/ce7955a1-eb58-425a-872a-7ec102b8e090-kube-api-access-w2cnt\") pod \"ce7955a1-eb58-425a-872a-7ec102b8e090\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.751283 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ce7955a1-eb58-425a-872a-7ec102b8e090-thanos-prometheus-http-client-file\") pod \"ce7955a1-eb58-425a-872a-7ec102b8e090\" (UID: \"ce7955a1-eb58-425a-872a-7ec102b8e090\") " Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.752163 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce7955a1-eb58-425a-872a-7ec102b8e090-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "ce7955a1-eb58-425a-872a-7ec102b8e090" (UID: "ce7955a1-eb58-425a-872a-7ec102b8e090"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.752175 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce7955a1-eb58-425a-872a-7ec102b8e090-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "ce7955a1-eb58-425a-872a-7ec102b8e090" (UID: "ce7955a1-eb58-425a-872a-7ec102b8e090"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.753449 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce7955a1-eb58-425a-872a-7ec102b8e090-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "ce7955a1-eb58-425a-872a-7ec102b8e090" (UID: "ce7955a1-eb58-425a-872a-7ec102b8e090"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.756854 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce7955a1-eb58-425a-872a-7ec102b8e090-config-out" (OuterVolumeSpecName: "config-out") pod "ce7955a1-eb58-425a-872a-7ec102b8e090" (UID: "ce7955a1-eb58-425a-872a-7ec102b8e090"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.757671 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce7955a1-eb58-425a-872a-7ec102b8e090-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "ce7955a1-eb58-425a-872a-7ec102b8e090" (UID: "ce7955a1-eb58-425a-872a-7ec102b8e090"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.760273 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce7955a1-eb58-425a-872a-7ec102b8e090-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "ce7955a1-eb58-425a-872a-7ec102b8e090" (UID: "ce7955a1-eb58-425a-872a-7ec102b8e090"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.774193 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce7955a1-eb58-425a-872a-7ec102b8e090-kube-api-access-w2cnt" (OuterVolumeSpecName: "kube-api-access-w2cnt") pod "ce7955a1-eb58-425a-872a-7ec102b8e090" (UID: "ce7955a1-eb58-425a-872a-7ec102b8e090"). InnerVolumeSpecName "kube-api-access-w2cnt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.797418 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce7955a1-eb58-425a-872a-7ec102b8e090-web-config" (OuterVolumeSpecName: "web-config") pod "ce7955a1-eb58-425a-872a-7ec102b8e090" (UID: "ce7955a1-eb58-425a-872a-7ec102b8e090"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.799129 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce7955a1-eb58-425a-872a-7ec102b8e090-config" (OuterVolumeSpecName: "config") pod "ce7955a1-eb58-425a-872a-7ec102b8e090" (UID: "ce7955a1-eb58-425a-872a-7ec102b8e090"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.805893 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68e86941-9560-4703-a0e6-50bee25f62a0" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "ce7955a1-eb58-425a-872a-7ec102b8e090" (UID: "ce7955a1-eb58-425a-872a-7ec102b8e090"). InnerVolumeSpecName "pvc-68e86941-9560-4703-a0e6-50bee25f62a0". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.853755 4886 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ce7955a1-eb58-425a-872a-7ec102b8e090-web-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.853787 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2cnt\" (UniqueName: \"kubernetes.io/projected/ce7955a1-eb58-425a-872a-7ec102b8e090-kube-api-access-w2cnt\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.853799 4886 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ce7955a1-eb58-425a-872a-7ec102b8e090-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.853853 4886 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-68e86941-9560-4703-a0e6-50bee25f62a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68e86941-9560-4703-a0e6-50bee25f62a0\") on node \"crc\" " Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.853864 4886 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ce7955a1-eb58-425a-872a-7ec102b8e090-config-out\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.853874 4886 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ce7955a1-eb58-425a-872a-7ec102b8e090-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.853884 4886 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ce7955a1-eb58-425a-872a-7ec102b8e090-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.853894 4886 reconciler_common.go:293] "Volume detached for 
volume \"config\" (UniqueName: \"kubernetes.io/secret/ce7955a1-eb58-425a-872a-7ec102b8e090-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.853902 4886 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ce7955a1-eb58-425a-872a-7ec102b8e090-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.853910 4886 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ce7955a1-eb58-425a-872a-7ec102b8e090-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.880853 4886 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.884068 4886 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-68e86941-9560-4703-a0e6-50bee25f62a0" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68e86941-9560-4703-a0e6-50bee25f62a0") on node "crc" Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.940771 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"f0d54f6d-4531-4707-8c1a-aed5e0e36d0e","Type":"ContainerStarted","Data":"a4b442eb660a759ea9b06148625ca4e079373c7e47cea96d0478208100ae22a9"} Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.947841 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ce7955a1-eb58-425a-872a-7ec102b8e090","Type":"ContainerDied","Data":"38705f04f0f2e20b7f5d72009f437278994e72d7c6d255707ef36ddaf6f80953"} Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.947899 4886 scope.go:117] "RemoveContainer" containerID="29b6600206cc1bb7f3f16719ec90e5544c72d2eaf5a596eaa0dcf19be615c898" Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.948062 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.964116 4886 reconciler_common.go:293] "Volume detached for volume \"pvc-68e86941-9560-4703-a0e6-50bee25f62a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68e86941-9560-4703-a0e6-50bee25f62a0\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:08 crc kubenswrapper[4886]: I0129 17:05:08.991081 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.015546 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.015810 4886 scope.go:117] "RemoveContainer" containerID="3a9c53d5227fb7b0c6bf2e7197762b1a4d147cab6dde0f951e7924a558b5e58d" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.059402 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 29 17:05:09 crc kubenswrapper[4886]: E0129 17:05:09.060017 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7955a1-eb58-425a-872a-7ec102b8e090" containerName="thanos-sidecar" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.060037 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7955a1-eb58-425a-872a-7ec102b8e090" containerName="thanos-sidecar" Jan 29 17:05:09 crc kubenswrapper[4886]: E0129 17:05:09.060075 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7955a1-eb58-425a-872a-7ec102b8e090" containerName="config-reloader" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.060083 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7955a1-eb58-425a-872a-7ec102b8e090" containerName="config-reloader" Jan 29 17:05:09 crc kubenswrapper[4886]: E0129 17:05:09.060169 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7955a1-eb58-425a-872a-7ec102b8e090" containerName="init-config-reloader" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.060180 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7955a1-eb58-425a-872a-7ec102b8e090" containerName="init-config-reloader" Jan 29 17:05:09 crc kubenswrapper[4886]: E0129 17:05:09.060199 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7955a1-eb58-425a-872a-7ec102b8e090" containerName="prometheus" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.060206 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7955a1-eb58-425a-872a-7ec102b8e090" containerName="prometheus" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.060408 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce7955a1-eb58-425a-872a-7ec102b8e090" containerName="config-reloader" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.060428 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce7955a1-eb58-425a-872a-7ec102b8e090" containerName="thanos-sidecar" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.060438 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce7955a1-eb58-425a-872a-7ec102b8e090" containerName="prometheus" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.062476 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.066985 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-gbmnx" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.070606 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.071184 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.071351 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.071706 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.071828 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.071981 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.072077 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.072140 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.076672 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.086805 4886 scope.go:117] "RemoveContainer" containerID="36870feb46aff15218a1df0a6e9d4aa854998ebadaa74a5a50b3e39905ffbc8c" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.155717 4886 scope.go:117] "RemoveContainer" containerID="583c2c73cc1b55ad9f4f022652302dc10ae77e94e45a693b0865ff8b717978ab" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.168914 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.169011 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.169060 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-68e86941-9560-4703-a0e6-50bee25f62a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68e86941-9560-4703-a0e6-50bee25f62a0\") pod 
\"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.169105 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.169133 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-config\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.169178 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.169297 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.169348 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.169463 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m24b4\" (UniqueName: \"kubernetes.io/projected/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-kube-api-access-m24b4\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.169496 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.169528 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 
17:05:09.169640 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.169732 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.272110 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.272168 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-68e86941-9560-4703-a0e6-50bee25f62a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68e86941-9560-4703-a0e6-50bee25f62a0\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.272207 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.272226 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-config\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.272260 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.272318 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.272354 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.272393 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m24b4\" (UniqueName: \"kubernetes.io/projected/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-kube-api-access-m24b4\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.272413 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.272438 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.272472 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.272490 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.272522 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.273298 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.276835 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-config\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " 
pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.276843 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.277551 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.277667 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.277935 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.277956 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-68e86941-9560-4703-a0e6-50bee25f62a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68e86941-9560-4703-a0e6-50bee25f62a0\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5b5b0b1c62be5d324bfe10f676e08a70a611b72b2c99a9227275ea9ec17aa7e0/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.279381 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.282402 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.283066 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.284696 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" 
(UniqueName: \"kubernetes.io/secret/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.287764 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.290055 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m24b4\" (UniqueName: \"kubernetes.io/projected/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-kube-api-access-m24b4\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.293565 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8b3a2d6b-4eb5-44a2-837b-cfbe63f07107-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.315911 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-68e86941-9560-4703-a0e6-50bee25f62a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68e86941-9560-4703-a0e6-50bee25f62a0\") pod \"prometheus-metric-storage-0\" (UID: \"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107\") " pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:09 crc kubenswrapper[4886]: I0129 17:05:09.407423 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:10 crc kubenswrapper[4886]: I0129 17:05:10.631853 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce7955a1-eb58-425a-872a-7ec102b8e090" path="/var/lib/kubelet/pods/ce7955a1-eb58-425a-872a-7ec102b8e090/volumes" Jan 29 17:05:10 crc kubenswrapper[4886]: I0129 17:05:10.771174 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 29 17:05:15 crc kubenswrapper[4886]: I0129 17:05:15.298595 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="2b0be43b-8956-45aa-ad50-de9183b3fea3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.144:5671: connect: connection refused" Jan 29 17:05:15 crc kubenswrapper[4886]: I0129 17:05:15.444870 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.146:5671: connect: connection refused" Jan 29 17:05:15 crc kubenswrapper[4886]: I0129 17:05:15.638085 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Jan 29 17:05:16 crc kubenswrapper[4886]: I0129 17:05:16.329704 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-etc-swift\") pod \"swift-storage-0\" (UID: \"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47\") " pod="openstack/swift-storage-0" Jan 29 17:05:16 crc kubenswrapper[4886]: I0129 17:05:16.336361 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6e2f2c6c-bc32-4a32-ba2c-8954d277ce47-etc-swift\") pod \"swift-storage-0\" (UID: \"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47\") " pod="openstack/swift-storage-0" Jan 29 17:05:16 crc kubenswrapper[4886]: I0129 17:05:16.475426 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 29 17:05:17 crc kubenswrapper[4886]: I0129 17:05:17.041363 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107","Type":"ContainerStarted","Data":"008507624c5e459bffcfe3745d6841d3a84f99cc269885679b7c9e83134281c5"} Jan 29 17:05:19 crc kubenswrapper[4886]: I0129 17:05:19.519721 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 29 17:05:19 crc kubenswrapper[4886]: W0129 17:05:19.524199 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e2f2c6c_bc32_4a32_ba2c_8954d277ce47.slice/crio-251f09491fb6f3211881c60459fd725a28b397882e9bf117072fb4445ff00e03 WatchSource:0}: Error finding container 251f09491fb6f3211881c60459fd725a28b397882e9bf117072fb4445ff00e03: Status 404 returned error can't find the container with id 251f09491fb6f3211881c60459fd725a28b397882e9bf117072fb4445ff00e03 Jan 29 17:05:20 crc kubenswrapper[4886]: I0129 17:05:20.076071 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-thqn5" event={"ID":"9f114908-5594-4378-939f-f54b2157d676","Type":"ContainerStarted","Data":"76e9fd9551f88713599d793f819bec47fc38185510d47fbd152e0939943ac037"} Jan 29 17:05:20 crc kubenswrapper[4886]: I0129 17:05:20.077796 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"f0d54f6d-4531-4707-8c1a-aed5e0e36d0e","Type":"ContainerStarted","Data":"2df9bc2e05bc1630cc3e5fb6a640fa85bdf65d2d98be5d0f01536073ed245e66"} Jan 29 17:05:20 crc kubenswrapper[4886]: I0129 17:05:20.080282 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47","Type":"ContainerStarted","Data":"251f09491fb6f3211881c60459fd725a28b397882e9bf117072fb4445ff00e03"} Jan 29 17:05:20 crc kubenswrapper[4886]: I0129 17:05:20.103604 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-thqn5" podStartSLOduration=2.888788242 podStartE2EDuration="21.103587655s" podCreationTimestamp="2026-01-29 17:04:59 +0000 UTC" firstStartedPulling="2026-01-29 17:05:00.742956749 +0000 UTC m=+2583.651676021" lastFinishedPulling="2026-01-29 17:05:18.957756122 +0000 UTC m=+2601.866475434" observedRunningTime="2026-01-29 17:05:20.095774455 +0000 UTC m=+2603.004493737" watchObservedRunningTime="2026-01-29 17:05:20.103587655 +0000 UTC m=+2603.012306927" Jan 29 17:05:20 crc kubenswrapper[4886]: I0129 17:05:20.124845 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.391488973 podStartE2EDuration="13.124807173s" podCreationTimestamp="2026-01-29 17:05:07 +0000 UTC" firstStartedPulling="2026-01-29 17:05:08.224426951 +0000 UTC m=+2591.133146223" lastFinishedPulling="2026-01-29 17:05:18.957745131 +0000 UTC m=+2601.866464423" observedRunningTime="2026-01-29 17:05:20.114821381 +0000 UTC m=+2603.023540663" watchObservedRunningTime="2026-01-29 17:05:20.124807173 +0000 UTC m=+2603.033526495" Jan 29 17:05:22 crc kubenswrapper[4886]: I0129 17:05:22.615166 4886 scope.go:117] "RemoveContainer" containerID="1ef597c576c05004c5148470ade7ddd51ab3cad8d942f918ff09afb054559dfc" Jan 29 17:05:22 crc kubenswrapper[4886]: E0129 17:05:22.615893 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:05:23 crc kubenswrapper[4886]: I0129 17:05:23.116684 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107","Type":"ContainerStarted","Data":"de7dada2ef19babe3f5199b8971a1952c603cdf7fc481479b9ab0e7054f6362b"} Jan 29 17:05:25 crc kubenswrapper[4886]: I0129 17:05:25.299648 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 29 17:05:25 crc kubenswrapper[4886]: I0129 17:05:25.444234 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.146:5671: connect: connection refused" Jan 29 17:05:26 crc kubenswrapper[4886]: I0129 17:05:26.166182 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47","Type":"ContainerStarted","Data":"91dfedcd84ac1fcfc4233d9c608ed66798ca8b2cc395de0a7cfa1a84b6ad0b93"} Jan 29 17:05:27 crc kubenswrapper[4886]: I0129 17:05:27.180433 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47","Type":"ContainerStarted","Data":"210093098e70027ca0511a925eb7d3f4d788705183245a1fed785c07c0db8d0c"} Jan 29 17:05:28 crc kubenswrapper[4886]: I0129 17:05:28.198694 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47","Type":"ContainerStarted","Data":"675fd086dcbf76788ef35301c27eec88d51392bbe6d2527c9b8247b18b6bedc8"} Jan 29 17:05:32 crc kubenswrapper[4886]: I0129 17:05:32.237297 4886 generic.go:334] "Generic (PLEG): container finished" podID="8b3a2d6b-4eb5-44a2-837b-cfbe63f07107" containerID="de7dada2ef19babe3f5199b8971a1952c603cdf7fc481479b9ab0e7054f6362b" exitCode=0 Jan 29 17:05:32 crc kubenswrapper[4886]: I0129 17:05:32.237363 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107","Type":"ContainerDied","Data":"de7dada2ef19babe3f5199b8971a1952c603cdf7fc481479b9ab0e7054f6362b"} Jan 29 17:05:32 crc kubenswrapper[4886]: I0129 17:05:32.241363 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47","Type":"ContainerStarted","Data":"4bf82f389eecacebff2da62d86dc9ced9849a658b1bb5c3ad10e05ed2b182877"} Jan 29 17:05:33 crc kubenswrapper[4886]: I0129 17:05:33.266726 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107","Type":"ContainerStarted","Data":"04d25e6f1ec09cdc59b613cf64cd249765d99f850dfac14010aa9c2703547555"} Jan 29 17:05:34 crc kubenswrapper[4886]: I0129 17:05:34.615877 4886 scope.go:117] "RemoveContainer" containerID="1ef597c576c05004c5148470ade7ddd51ab3cad8d942f918ff09afb054559dfc" Jan 29 17:05:35 crc kubenswrapper[4886]: I0129 17:05:35.288905 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerStarted","Data":"db3893b2fd9096a13f5744612d4a2bcbba80c7ed2ddb6ffa1307348c351b1963"} Jan 29 17:05:35 crc kubenswrapper[4886]: I0129 17:05:35.448568 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Jan 29 17:05:35 crc kubenswrapper[4886]: I0129 17:05:35.978760 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-b8qfq"] Jan 29 17:05:35 crc kubenswrapper[4886]: I0129 17:05:35.980487 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-b8qfq" Jan 29 17:05:35 crc kubenswrapper[4886]: I0129 17:05:35.987705 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-b8qfq"] Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.078119 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrmtg\" (UniqueName: \"kubernetes.io/projected/219e979e-b3a8-42d0-8f23-737a86a2aefb-kube-api-access-qrmtg\") pod \"heat-db-create-b8qfq\" (UID: \"219e979e-b3a8-42d0-8f23-737a86a2aefb\") " pod="openstack/heat-db-create-b8qfq" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.078281 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/219e979e-b3a8-42d0-8f23-737a86a2aefb-operator-scripts\") pod \"heat-db-create-b8qfq\" (UID: \"219e979e-b3a8-42d0-8f23-737a86a2aefb\") " pod="openstack/heat-db-create-b8qfq" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.091019 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-5m27f"] Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.092685 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-5m27f" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.165198 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-5m27f"] Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.180769 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrmtg\" (UniqueName: \"kubernetes.io/projected/219e979e-b3a8-42d0-8f23-737a86a2aefb-kube-api-access-qrmtg\") pod \"heat-db-create-b8qfq\" (UID: \"219e979e-b3a8-42d0-8f23-737a86a2aefb\") " pod="openstack/heat-db-create-b8qfq" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.180954 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/219e979e-b3a8-42d0-8f23-737a86a2aefb-operator-scripts\") pod \"heat-db-create-b8qfq\" (UID: \"219e979e-b3a8-42d0-8f23-737a86a2aefb\") " pod="openstack/heat-db-create-b8qfq" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.181018 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eca25333-29b2-4c38-9e85-ebd2a0d593d6-operator-scripts\") pod \"cinder-db-create-5m27f\" (UID: \"eca25333-29b2-4c38-9e85-ebd2a0d593d6\") " pod="openstack/cinder-db-create-5m27f" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.181071 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8zm4\" (UniqueName: \"kubernetes.io/projected/eca25333-29b2-4c38-9e85-ebd2a0d593d6-kube-api-access-c8zm4\") pod \"cinder-db-create-5m27f\" (UID: \"eca25333-29b2-4c38-9e85-ebd2a0d593d6\") " pod="openstack/cinder-db-create-5m27f" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.182488 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/219e979e-b3a8-42d0-8f23-737a86a2aefb-operator-scripts\") pod \"heat-db-create-b8qfq\" (UID: \"219e979e-b3a8-42d0-8f23-737a86a2aefb\") " pod="openstack/heat-db-create-b8qfq" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.211080 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrmtg\" (UniqueName: \"kubernetes.io/projected/219e979e-b3a8-42d0-8f23-737a86a2aefb-kube-api-access-qrmtg\") pod \"heat-db-create-b8qfq\" (UID: \"219e979e-b3a8-42d0-8f23-737a86a2aefb\") " pod="openstack/heat-db-create-b8qfq" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.243168 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-bd38-account-create-update-rgmr5"] Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.251626 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-bd38-account-create-update-rgmr5" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.261611 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.268742 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-bd38-account-create-update-rgmr5"] Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.283670 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eca25333-29b2-4c38-9e85-ebd2a0d593d6-operator-scripts\") pod \"cinder-db-create-5m27f\" (UID: \"eca25333-29b2-4c38-9e85-ebd2a0d593d6\") " pod="openstack/cinder-db-create-5m27f" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.283765 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8zm4\" (UniqueName: \"kubernetes.io/projected/eca25333-29b2-4c38-9e85-ebd2a0d593d6-kube-api-access-c8zm4\") pod \"cinder-db-create-5m27f\" (UID: \"eca25333-29b2-4c38-9e85-ebd2a0d593d6\") " pod="openstack/cinder-db-create-5m27f" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.284549 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eca25333-29b2-4c38-9e85-ebd2a0d593d6-operator-scripts\") pod \"cinder-db-create-5m27f\" (UID: \"eca25333-29b2-4c38-9e85-ebd2a0d593d6\") " pod="openstack/cinder-db-create-5m27f" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.298412 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107","Type":"ContainerStarted","Data":"1f127eef1f75d009bfa88d892080ac8076b5b396ae14658ea85d8c93fccd374f"} Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.315929 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-b8qfq" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.325761 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8zm4\" (UniqueName: \"kubernetes.io/projected/eca25333-29b2-4c38-9e85-ebd2a0d593d6-kube-api-access-c8zm4\") pod \"cinder-db-create-5m27f\" (UID: \"eca25333-29b2-4c38-9e85-ebd2a0d593d6\") " pod="openstack/cinder-db-create-5m27f" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.384915 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-vvrp4"] Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.386136 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c31fe7aa-0ad1-44ef-a748-b4f366a4d374-operator-scripts\") pod \"cinder-bd38-account-create-update-rgmr5\" (UID: \"c31fe7aa-0ad1-44ef-a748-b4f366a4d374\") " pod="openstack/cinder-bd38-account-create-update-rgmr5" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.386234 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lv2x\" (UniqueName: \"kubernetes.io/projected/c31fe7aa-0ad1-44ef-a748-b4f366a4d374-kube-api-access-6lv2x\") pod \"cinder-bd38-account-create-update-rgmr5\" (UID: \"c31fe7aa-0ad1-44ef-a748-b4f366a4d374\") " pod="openstack/cinder-bd38-account-create-update-rgmr5" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.386287 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vvrp4" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.398028 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-70c1-account-create-update-gwzzv"] Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.399644 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-70c1-account-create-update-gwzzv" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.402368 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.413402 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-5m27f" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.450025 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-vvrp4"] Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.466993 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-70c1-account-create-update-gwzzv"] Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.488027 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhtgf\" (UniqueName: \"kubernetes.io/projected/2b3dc785-5f55-49ca-8678-5105ba7e0568-kube-api-access-lhtgf\") pod \"barbican-70c1-account-create-update-gwzzv\" (UID: \"2b3dc785-5f55-49ca-8678-5105ba7e0568\") " pod="openstack/barbican-70c1-account-create-update-gwzzv" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.488108 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c31fe7aa-0ad1-44ef-a748-b4f366a4d374-operator-scripts\") pod \"cinder-bd38-account-create-update-rgmr5\" (UID: \"c31fe7aa-0ad1-44ef-a748-b4f366a4d374\") " pod="openstack/cinder-bd38-account-create-update-rgmr5" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.488164 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61eedb40-ed14-42aa-9751-8bedcd699260-operator-scripts\") pod \"barbican-db-create-vvrp4\" (UID: \"61eedb40-ed14-42aa-9751-8bedcd699260\") " pod="openstack/barbican-db-create-vvrp4" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.488188 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b3dc785-5f55-49ca-8678-5105ba7e0568-operator-scripts\") pod \"barbican-70c1-account-create-update-gwzzv\" (UID: \"2b3dc785-5f55-49ca-8678-5105ba7e0568\") " pod="openstack/barbican-70c1-account-create-update-gwzzv" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.488233 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lv2x\" (UniqueName: \"kubernetes.io/projected/c31fe7aa-0ad1-44ef-a748-b4f366a4d374-kube-api-access-6lv2x\") pod \"cinder-bd38-account-create-update-rgmr5\" (UID: \"c31fe7aa-0ad1-44ef-a748-b4f366a4d374\") " pod="openstack/cinder-bd38-account-create-update-rgmr5" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.488261 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8fng\" (UniqueName: \"kubernetes.io/projected/61eedb40-ed14-42aa-9751-8bedcd699260-kube-api-access-r8fng\") pod \"barbican-db-create-vvrp4\" (UID: \"61eedb40-ed14-42aa-9751-8bedcd699260\") " pod="openstack/barbican-db-create-vvrp4" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.488916 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c31fe7aa-0ad1-44ef-a748-b4f366a4d374-operator-scripts\") pod \"cinder-bd38-account-create-update-rgmr5\" (UID: \"c31fe7aa-0ad1-44ef-a748-b4f366a4d374\") " pod="openstack/cinder-bd38-account-create-update-rgmr5" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.504467 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-8whvl"] Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 
17:05:36.505801 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8whvl" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.511056 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.511508 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-k5qcd" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.511583 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.511883 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.538052 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lv2x\" (UniqueName: \"kubernetes.io/projected/c31fe7aa-0ad1-44ef-a748-b4f366a4d374-kube-api-access-6lv2x\") pod \"cinder-bd38-account-create-update-rgmr5\" (UID: \"c31fe7aa-0ad1-44ef-a748-b4f366a4d374\") " pod="openstack/cinder-bd38-account-create-update-rgmr5" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.543401 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-4501-account-create-update-hj72z"] Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.545073 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-4501-account-create-update-hj72z" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.563160 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.563843 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8whvl"] Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.574359 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-4501-account-create-update-hj72z"] Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.589822 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxrkh\" (UniqueName: \"kubernetes.io/projected/6c9729b7-e21b-4509-b337-618094fb2d52-kube-api-access-gxrkh\") pod \"keystone-db-sync-8whvl\" (UID: \"6c9729b7-e21b-4509-b337-618094fb2d52\") " pod="openstack/keystone-db-sync-8whvl" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.590141 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhtgf\" (UniqueName: \"kubernetes.io/projected/2b3dc785-5f55-49ca-8678-5105ba7e0568-kube-api-access-lhtgf\") pod \"barbican-70c1-account-create-update-gwzzv\" (UID: \"2b3dc785-5f55-49ca-8678-5105ba7e0568\") " pod="openstack/barbican-70c1-account-create-update-gwzzv" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.590191 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c9729b7-e21b-4509-b337-618094fb2d52-combined-ca-bundle\") pod \"keystone-db-sync-8whvl\" (UID: \"6c9729b7-e21b-4509-b337-618094fb2d52\") " pod="openstack/keystone-db-sync-8whvl" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.590385 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6c9729b7-e21b-4509-b337-618094fb2d52-config-data\") pod \"keystone-db-sync-8whvl\" (UID: \"6c9729b7-e21b-4509-b337-618094fb2d52\") " pod="openstack/keystone-db-sync-8whvl" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.590457 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61eedb40-ed14-42aa-9751-8bedcd699260-operator-scripts\") pod \"barbican-db-create-vvrp4\" (UID: \"61eedb40-ed14-42aa-9751-8bedcd699260\") " pod="openstack/barbican-db-create-vvrp4" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.590491 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b3dc785-5f55-49ca-8678-5105ba7e0568-operator-scripts\") pod \"barbican-70c1-account-create-update-gwzzv\" (UID: \"2b3dc785-5f55-49ca-8678-5105ba7e0568\") " pod="openstack/barbican-70c1-account-create-update-gwzzv" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.590666 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8fng\" (UniqueName: \"kubernetes.io/projected/61eedb40-ed14-42aa-9751-8bedcd699260-kube-api-access-r8fng\") pod \"barbican-db-create-vvrp4\" (UID: \"61eedb40-ed14-42aa-9751-8bedcd699260\") " pod="openstack/barbican-db-create-vvrp4" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.598005 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61eedb40-ed14-42aa-9751-8bedcd699260-operator-scripts\") pod \"barbican-db-create-vvrp4\" (UID: \"61eedb40-ed14-42aa-9751-8bedcd699260\") " pod="openstack/barbican-db-create-vvrp4" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.598963 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b3dc785-5f55-49ca-8678-5105ba7e0568-operator-scripts\") pod \"barbican-70c1-account-create-update-gwzzv\" (UID: \"2b3dc785-5f55-49ca-8678-5105ba7e0568\") " pod="openstack/barbican-70c1-account-create-update-gwzzv" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.627803 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhtgf\" (UniqueName: \"kubernetes.io/projected/2b3dc785-5f55-49ca-8678-5105ba7e0568-kube-api-access-lhtgf\") pod \"barbican-70c1-account-create-update-gwzzv\" (UID: \"2b3dc785-5f55-49ca-8678-5105ba7e0568\") " pod="openstack/barbican-70c1-account-create-update-gwzzv" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.663068 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8fng\" (UniqueName: \"kubernetes.io/projected/61eedb40-ed14-42aa-9751-8bedcd699260-kube-api-access-r8fng\") pod \"barbican-db-create-vvrp4\" (UID: \"61eedb40-ed14-42aa-9751-8bedcd699260\") " pod="openstack/barbican-db-create-vvrp4" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.671537 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-mj8rv"] Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.672722 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mj8rv"] Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.674746 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-mj8rv" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.687089 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-bd38-account-create-update-rgmr5" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.692428 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c9729b7-e21b-4509-b337-618094fb2d52-config-data\") pod \"keystone-db-sync-8whvl\" (UID: \"6c9729b7-e21b-4509-b337-618094fb2d52\") " pod="openstack/keystone-db-sync-8whvl" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.692562 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxrkh\" (UniqueName: \"kubernetes.io/projected/6c9729b7-e21b-4509-b337-618094fb2d52-kube-api-access-gxrkh\") pod \"keystone-db-sync-8whvl\" (UID: \"6c9729b7-e21b-4509-b337-618094fb2d52\") " pod="openstack/keystone-db-sync-8whvl" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.692605 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg89r\" (UniqueName: \"kubernetes.io/projected/95df3f15-8d1d-4baf-bbb6-df4939f0d201-kube-api-access-rg89r\") pod \"heat-4501-account-create-update-hj72z\" (UID: \"95df3f15-8d1d-4baf-bbb6-df4939f0d201\") " pod="openstack/heat-4501-account-create-update-hj72z" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.692645 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c9729b7-e21b-4509-b337-618094fb2d52-combined-ca-bundle\") pod \"keystone-db-sync-8whvl\" (UID: \"6c9729b7-e21b-4509-b337-618094fb2d52\") " pod="openstack/keystone-db-sync-8whvl" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.692684 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95df3f15-8d1d-4baf-bbb6-df4939f0d201-operator-scripts\") pod \"heat-4501-account-create-update-hj72z\" (UID: \"95df3f15-8d1d-4baf-bbb6-df4939f0d201\") " pod="openstack/heat-4501-account-create-update-hj72z" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.722138 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vvrp4" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.757206 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-70c1-account-create-update-gwzzv" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.776319 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c9729b7-e21b-4509-b337-618094fb2d52-combined-ca-bundle\") pod \"keystone-db-sync-8whvl\" (UID: \"6c9729b7-e21b-4509-b337-618094fb2d52\") " pod="openstack/keystone-db-sync-8whvl" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.777310 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c9729b7-e21b-4509-b337-618094fb2d52-config-data\") pod \"keystone-db-sync-8whvl\" (UID: \"6c9729b7-e21b-4509-b337-618094fb2d52\") " pod="openstack/keystone-db-sync-8whvl" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.782395 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxrkh\" (UniqueName: \"kubernetes.io/projected/6c9729b7-e21b-4509-b337-618094fb2d52-kube-api-access-gxrkh\") pod \"keystone-db-sync-8whvl\" (UID: \"6c9729b7-e21b-4509-b337-618094fb2d52\") " pod="openstack/keystone-db-sync-8whvl" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.794670 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg89r\" (UniqueName: \"kubernetes.io/projected/95df3f15-8d1d-4baf-bbb6-df4939f0d201-kube-api-access-rg89r\") pod \"heat-4501-account-create-update-hj72z\" (UID: \"95df3f15-8d1d-4baf-bbb6-df4939f0d201\") " pod="openstack/heat-4501-account-create-update-hj72z" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.794778 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f34bb765-0998-45ea-bb61-9fbbc2c7359d-operator-scripts\") pod \"neutron-db-create-mj8rv\" (UID: \"f34bb765-0998-45ea-bb61-9fbbc2c7359d\") " pod="openstack/neutron-db-create-mj8rv" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.794864 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95df3f15-8d1d-4baf-bbb6-df4939f0d201-operator-scripts\") pod \"heat-4501-account-create-update-hj72z\" (UID: \"95df3f15-8d1d-4baf-bbb6-df4939f0d201\") " pod="openstack/heat-4501-account-create-update-hj72z" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.794962 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9f7t\" (UniqueName: \"kubernetes.io/projected/f34bb765-0998-45ea-bb61-9fbbc2c7359d-kube-api-access-r9f7t\") pod \"neutron-db-create-mj8rv\" (UID: \"f34bb765-0998-45ea-bb61-9fbbc2c7359d\") " pod="openstack/neutron-db-create-mj8rv" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.795866 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95df3f15-8d1d-4baf-bbb6-df4939f0d201-operator-scripts\") pod \"heat-4501-account-create-update-hj72z\" (UID: \"95df3f15-8d1d-4baf-bbb6-df4939f0d201\") " pod="openstack/heat-4501-account-create-update-hj72z" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.805530 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-e433-account-create-update-qm5sx"] Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.806874 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-e433-account-create-update-qm5sx" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.810830 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.811210 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg89r\" (UniqueName: \"kubernetes.io/projected/95df3f15-8d1d-4baf-bbb6-df4939f0d201-kube-api-access-rg89r\") pod \"heat-4501-account-create-update-hj72z\" (UID: \"95df3f15-8d1d-4baf-bbb6-df4939f0d201\") " pod="openstack/heat-4501-account-create-update-hj72z" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.827514 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-e433-account-create-update-qm5sx"] Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.897501 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f34bb765-0998-45ea-bb61-9fbbc2c7359d-operator-scripts\") pod \"neutron-db-create-mj8rv\" (UID: \"f34bb765-0998-45ea-bb61-9fbbc2c7359d\") " pod="openstack/neutron-db-create-mj8rv" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.897550 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kv5p\" (UniqueName: \"kubernetes.io/projected/b8e697ee-193d-4ce1-9905-cebf2e6ba7ff-kube-api-access-7kv5p\") pod \"neutron-e433-account-create-update-qm5sx\" (UID: \"b8e697ee-193d-4ce1-9905-cebf2e6ba7ff\") " pod="openstack/neutron-e433-account-create-update-qm5sx" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.897641 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9f7t\" (UniqueName: \"kubernetes.io/projected/f34bb765-0998-45ea-bb61-9fbbc2c7359d-kube-api-access-r9f7t\") pod \"neutron-db-create-mj8rv\" (UID: \"f34bb765-0998-45ea-bb61-9fbbc2c7359d\") " pod="openstack/neutron-db-create-mj8rv" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.897663 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8e697ee-193d-4ce1-9905-cebf2e6ba7ff-operator-scripts\") pod \"neutron-e433-account-create-update-qm5sx\" (UID: \"b8e697ee-193d-4ce1-9905-cebf2e6ba7ff\") " pod="openstack/neutron-e433-account-create-update-qm5sx" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.898604 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f34bb765-0998-45ea-bb61-9fbbc2c7359d-operator-scripts\") pod \"neutron-db-create-mj8rv\" (UID: \"f34bb765-0998-45ea-bb61-9fbbc2c7359d\") " pod="openstack/neutron-db-create-mj8rv" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.903693 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-8whvl" Jan 29 17:05:36 crc kubenswrapper[4886]: I0129 17:05:36.918226 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9f7t\" (UniqueName: \"kubernetes.io/projected/f34bb765-0998-45ea-bb61-9fbbc2c7359d-kube-api-access-r9f7t\") pod \"neutron-db-create-mj8rv\" (UID: \"f34bb765-0998-45ea-bb61-9fbbc2c7359d\") " pod="openstack/neutron-db-create-mj8rv" Jan 29 17:05:37 crc kubenswrapper[4886]: I0129 17:05:36.999741 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kv5p\" (UniqueName: \"kubernetes.io/projected/b8e697ee-193d-4ce1-9905-cebf2e6ba7ff-kube-api-access-7kv5p\") pod \"neutron-e433-account-create-update-qm5sx\" (UID: \"b8e697ee-193d-4ce1-9905-cebf2e6ba7ff\") " pod="openstack/neutron-e433-account-create-update-qm5sx" Jan 29 17:05:37 crc kubenswrapper[4886]: I0129 17:05:36.999882 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8e697ee-193d-4ce1-9905-cebf2e6ba7ff-operator-scripts\") pod \"neutron-e433-account-create-update-qm5sx\" (UID: \"b8e697ee-193d-4ce1-9905-cebf2e6ba7ff\") " pod="openstack/neutron-e433-account-create-update-qm5sx" Jan 29 17:05:37 crc kubenswrapper[4886]: I0129 17:05:37.000846 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8e697ee-193d-4ce1-9905-cebf2e6ba7ff-operator-scripts\") pod \"neutron-e433-account-create-update-qm5sx\" (UID: \"b8e697ee-193d-4ce1-9905-cebf2e6ba7ff\") " pod="openstack/neutron-e433-account-create-update-qm5sx" Jan 29 17:05:37 crc kubenswrapper[4886]: I0129 17:05:37.028344 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-4501-account-create-update-hj72z" Jan 29 17:05:37 crc kubenswrapper[4886]: I0129 17:05:37.049090 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mj8rv" Jan 29 17:05:37 crc kubenswrapper[4886]: I0129 17:05:37.071555 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kv5p\" (UniqueName: \"kubernetes.io/projected/b8e697ee-193d-4ce1-9905-cebf2e6ba7ff-kube-api-access-7kv5p\") pod \"neutron-e433-account-create-update-qm5sx\" (UID: \"b8e697ee-193d-4ce1-9905-cebf2e6ba7ff\") " pod="openstack/neutron-e433-account-create-update-qm5sx" Jan 29 17:05:37 crc kubenswrapper[4886]: I0129 17:05:37.157708 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-e433-account-create-update-qm5sx" Jan 29 17:05:37 crc kubenswrapper[4886]: I0129 17:05:37.385230 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-b8qfq"] Jan 29 17:05:37 crc kubenswrapper[4886]: W0129 17:05:37.432602 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod219e979e_b3a8_42d0_8f23_737a86a2aefb.slice/crio-4406b94675c6c7ae9446195f8dfab310f4fa8a3adf586cc31ec4c425aaec53ea WatchSource:0}: Error finding container 4406b94675c6c7ae9446195f8dfab310f4fa8a3adf586cc31ec4c425aaec53ea: Status 404 returned error can't find the container with id 4406b94675c6c7ae9446195f8dfab310f4fa8a3adf586cc31ec4c425aaec53ea Jan 29 17:05:37 crc kubenswrapper[4886]: E0129 17:05:37.455140 4886 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.174:39510->38.129.56.174:35269: write tcp 38.129.56.174:39510->38.129.56.174:35269: write: broken pipe Jan 29 17:05:37 crc kubenswrapper[4886]: W0129 17:05:37.821396 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeca25333_29b2_4c38_9e85_ebd2a0d593d6.slice/crio-cd4898dfd3366424ff76daf2236da5aa1109f2d2ee7053756e696c5c71f74315 WatchSource:0}: Error finding container cd4898dfd3366424ff76daf2236da5aa1109f2d2ee7053756e696c5c71f74315: Status 404 returned error can't find the container with id cd4898dfd3366424ff76daf2236da5aa1109f2d2ee7053756e696c5c71f74315 Jan 29 17:05:37 crc kubenswrapper[4886]: I0129 17:05:37.821846 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-5m27f"] Jan 29 17:05:37 crc kubenswrapper[4886]: I0129 17:05:37.866155 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-vvrp4"] Jan 29 17:05:37 crc kubenswrapper[4886]: I0129 17:05:37.907355 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-bd38-account-create-update-rgmr5"] Jan 29 17:05:37 crc kubenswrapper[4886]: I0129 17:05:37.935362 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-70c1-account-create-update-gwzzv"] Jan 29 17:05:37 crc kubenswrapper[4886]: W0129 17:05:37.941066 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc31fe7aa_0ad1_44ef_a748_b4f366a4d374.slice/crio-5d2dfc86002d797af59c9cb682ec219bf20ee62338a9f69385af929e1e8a81cc WatchSource:0}: Error finding container 5d2dfc86002d797af59c9cb682ec219bf20ee62338a9f69385af929e1e8a81cc: Status 404 returned error can't find the container with id 5d2dfc86002d797af59c9cb682ec219bf20ee62338a9f69385af929e1e8a81cc Jan 29 17:05:37 crc kubenswrapper[4886]: W0129 17:05:37.942208 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b3dc785_5f55_49ca_8678_5105ba7e0568.slice/crio-723376c3c9f49ffb2963a000b3bd3332b032ec0a620314db2f5d4affe87fe53d WatchSource:0}: Error finding container 723376c3c9f49ffb2963a000b3bd3332b032ec0a620314db2f5d4affe87fe53d: Status 404 returned error can't find the container with id 723376c3c9f49ffb2963a000b3bd3332b032ec0a620314db2f5d4affe87fe53d Jan 29 17:05:37 crc kubenswrapper[4886]: I0129 17:05:37.954822 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8whvl"] Jan 29 17:05:38 crc kubenswrapper[4886]: I0129 17:05:38.221054 
4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mj8rv"] Jan 29 17:05:38 crc kubenswrapper[4886]: I0129 17:05:38.240372 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-4501-account-create-update-hj72z"] Jan 29 17:05:38 crc kubenswrapper[4886]: I0129 17:05:38.300031 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-e433-account-create-update-qm5sx"] Jan 29 17:05:38 crc kubenswrapper[4886]: I0129 17:05:38.344517 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vvrp4" event={"ID":"61eedb40-ed14-42aa-9751-8bedcd699260","Type":"ContainerStarted","Data":"9fec24589ec3e892ddf58d22ea6ebcc076444b7d5a5a5f362446314614208572"} Jan 29 17:05:38 crc kubenswrapper[4886]: I0129 17:05:38.350543 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-bd38-account-create-update-rgmr5" event={"ID":"c31fe7aa-0ad1-44ef-a748-b4f366a4d374","Type":"ContainerStarted","Data":"5d2dfc86002d797af59c9cb682ec219bf20ee62338a9f69385af929e1e8a81cc"} Jan 29 17:05:38 crc kubenswrapper[4886]: I0129 17:05:38.362659 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8b3a2d6b-4eb5-44a2-837b-cfbe63f07107","Type":"ContainerStarted","Data":"cac4502f21828cb5ae9e53c7348f0195d428843a6c621cdd4045a212fbc7700c"} Jan 29 17:05:38 crc kubenswrapper[4886]: I0129 17:05:38.368305 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-b8qfq" event={"ID":"219e979e-b3a8-42d0-8f23-737a86a2aefb","Type":"ContainerStarted","Data":"ce7bb70d8d66605a00b65db196f138b8d093db85ba2aba770dcd073411b5b8b4"} Jan 29 17:05:38 crc kubenswrapper[4886]: I0129 17:05:38.368370 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-b8qfq" event={"ID":"219e979e-b3a8-42d0-8f23-737a86a2aefb","Type":"ContainerStarted","Data":"4406b94675c6c7ae9446195f8dfab310f4fa8a3adf586cc31ec4c425aaec53ea"} Jan 29 17:05:38 crc kubenswrapper[4886]: I0129 17:05:38.373798 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47","Type":"ContainerStarted","Data":"810d58f9bf0547af48c65900b9763c368fc3a05bc3a9ac21ac6e368c9e7f38cf"} Jan 29 17:05:38 crc kubenswrapper[4886]: I0129 17:05:38.375655 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8whvl" event={"ID":"6c9729b7-e21b-4509-b337-618094fb2d52","Type":"ContainerStarted","Data":"5f929b6a33cac9c82c31ed28623b82d784e928ccd3655129beee8b99eab88731"} Jan 29 17:05:38 crc kubenswrapper[4886]: I0129 17:05:38.386615 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-4501-account-create-update-hj72z" event={"ID":"95df3f15-8d1d-4baf-bbb6-df4939f0d201","Type":"ContainerStarted","Data":"e0c4c5770b60c8e587eeeb148d840581349fd237cbedc0ac808c5bcb6eecdacf"} Jan 29 17:05:38 crc kubenswrapper[4886]: I0129 17:05:38.404835 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=29.404809232 podStartE2EDuration="29.404809232s" podCreationTimestamp="2026-01-29 17:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:05:38.397395953 +0000 UTC m=+2621.306115235" watchObservedRunningTime="2026-01-29 17:05:38.404809232 +0000 UTC m=+2621.313528504" Jan 29 17:05:38 crc 
kubenswrapper[4886]: I0129 17:05:38.405923 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-5m27f" event={"ID":"eca25333-29b2-4c38-9e85-ebd2a0d593d6","Type":"ContainerStarted","Data":"cd4898dfd3366424ff76daf2236da5aa1109f2d2ee7053756e696c5c71f74315"} Jan 29 17:05:38 crc kubenswrapper[4886]: I0129 17:05:38.413756 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mj8rv" event={"ID":"f34bb765-0998-45ea-bb61-9fbbc2c7359d","Type":"ContainerStarted","Data":"72783bbbfa79040fb4dc3f351898bfde9b1e9857733a1a00ee4d73ce0d7d9e05"} Jan 29 17:05:38 crc kubenswrapper[4886]: I0129 17:05:38.423156 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-70c1-account-create-update-gwzzv" event={"ID":"2b3dc785-5f55-49ca-8678-5105ba7e0568","Type":"ContainerStarted","Data":"723376c3c9f49ffb2963a000b3bd3332b032ec0a620314db2f5d4affe87fe53d"} Jan 29 17:05:39 crc kubenswrapper[4886]: I0129 17:05:39.408577 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:39 crc kubenswrapper[4886]: I0129 17:05:39.409532 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:39 crc kubenswrapper[4886]: I0129 17:05:39.434967 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-4501-account-create-update-hj72z" event={"ID":"95df3f15-8d1d-4baf-bbb6-df4939f0d201","Type":"ContainerStarted","Data":"05a52ecdbf485c6c724d9a992c69aca83958ea1704df0dac8409ddf6fbc7b4d1"} Jan 29 17:05:39 crc kubenswrapper[4886]: I0129 17:05:39.437055 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-bd38-account-create-update-rgmr5" event={"ID":"c31fe7aa-0ad1-44ef-a748-b4f366a4d374","Type":"ContainerStarted","Data":"1b2a63dcfed7450a36197cbdc154c29e365ef6be50e63a79bd321d9e35afd21f"} Jan 29 17:05:39 crc kubenswrapper[4886]: I0129 17:05:39.439286 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vvrp4" event={"ID":"61eedb40-ed14-42aa-9751-8bedcd699260","Type":"ContainerStarted","Data":"9211a739518fb120e2bda32757d910dcbc67d03a2ddbfea02f5bc9964d2f0a2d"} Jan 29 17:05:39 crc kubenswrapper[4886]: I0129 17:05:39.441318 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e433-account-create-update-qm5sx" event={"ID":"b8e697ee-193d-4ce1-9905-cebf2e6ba7ff","Type":"ContainerStarted","Data":"c6fd592bb372f4bd56073a5709a8ef40ff848343cbd26b66d1e162d12eab6737"} Jan 29 17:05:39 crc kubenswrapper[4886]: I0129 17:05:39.441387 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e433-account-create-update-qm5sx" event={"ID":"b8e697ee-193d-4ce1-9905-cebf2e6ba7ff","Type":"ContainerStarted","Data":"dda352e99ae8511daf9d45b3e13077ccd37a0c2ef1768700d23fc09ac829a3b5"} Jan 29 17:05:39 crc kubenswrapper[4886]: I0129 17:05:39.444427 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-5m27f" event={"ID":"eca25333-29b2-4c38-9e85-ebd2a0d593d6","Type":"ContainerStarted","Data":"c217cd04d2dba654b23c94e4b5b9acb5912a4546fafe4781e26a2d0d53058004"} Jan 29 17:05:39 crc kubenswrapper[4886]: I0129 17:05:39.446622 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-70c1-account-create-update-gwzzv" event={"ID":"2b3dc785-5f55-49ca-8678-5105ba7e0568","Type":"ContainerStarted","Data":"e61c63ed7fdb0d740a758c779dfae1d17126672ffa65adff6cc5cd29f6bcc51c"} Jan 29 17:05:39 
crc kubenswrapper[4886]: I0129 17:05:39.473060 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-b8qfq" podStartSLOduration=4.47304162 podStartE2EDuration="4.47304162s" podCreationTimestamp="2026-01-29 17:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:05:39.46277419 +0000 UTC m=+2622.371493492" watchObservedRunningTime="2026-01-29 17:05:39.47304162 +0000 UTC m=+2622.381760892" Jan 29 17:05:39 crc kubenswrapper[4886]: I0129 17:05:39.549381 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:40 crc kubenswrapper[4886]: I0129 17:05:40.467373 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47","Type":"ContainerStarted","Data":"756f69fd1c861029e4c8a391947b2f55ba605273bdd0554f8bef49cbf66dc04d"} Jan 29 17:05:40 crc kubenswrapper[4886]: I0129 17:05:40.484653 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mj8rv" event={"ID":"f34bb765-0998-45ea-bb61-9fbbc2c7359d","Type":"ContainerStarted","Data":"78746abbdca4d80f0a57707d5af0310c508403ee469b611bd3861cf01570354a"} Jan 29 17:05:40 crc kubenswrapper[4886]: I0129 17:05:40.498781 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-4501-account-create-update-hj72z" podStartSLOduration=4.498760079 podStartE2EDuration="4.498760079s" podCreationTimestamp="2026-01-29 17:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:05:40.497627217 +0000 UTC m=+2623.406346489" watchObservedRunningTime="2026-01-29 17:05:40.498760079 +0000 UTC m=+2623.407479351" Jan 29 17:05:40 crc kubenswrapper[4886]: I0129 17:05:40.499990 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 29 17:05:40 crc kubenswrapper[4886]: I0129 17:05:40.535557 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-mj8rv" podStartSLOduration=4.535533485 podStartE2EDuration="4.535533485s" podCreationTimestamp="2026-01-29 17:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:05:40.515534891 +0000 UTC m=+2623.424254173" watchObservedRunningTime="2026-01-29 17:05:40.535533485 +0000 UTC m=+2623.444252757" Jan 29 17:05:40 crc kubenswrapper[4886]: I0129 17:05:40.540758 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-e433-account-create-update-qm5sx" podStartSLOduration=4.540748111 podStartE2EDuration="4.540748111s" podCreationTimestamp="2026-01-29 17:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:05:40.53644736 +0000 UTC m=+2623.445166642" watchObservedRunningTime="2026-01-29 17:05:40.540748111 +0000 UTC m=+2623.449467383" Jan 29 17:05:40 crc kubenswrapper[4886]: I0129 17:05:40.566057 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-70c1-account-create-update-gwzzv" podStartSLOduration=4.566031234 podStartE2EDuration="4.566031234s" podCreationTimestamp="2026-01-29 17:05:36 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:05:40.549183249 +0000 UTC m=+2623.457902521" watchObservedRunningTime="2026-01-29 17:05:40.566031234 +0000 UTC m=+2623.474750506" Jan 29 17:05:40 crc kubenswrapper[4886]: I0129 17:05:40.582917 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-bd38-account-create-update-rgmr5" podStartSLOduration=4.582894449 podStartE2EDuration="4.582894449s" podCreationTimestamp="2026-01-29 17:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:05:40.577350902 +0000 UTC m=+2623.486070174" watchObservedRunningTime="2026-01-29 17:05:40.582894449 +0000 UTC m=+2623.491613721" Jan 29 17:05:40 crc kubenswrapper[4886]: I0129 17:05:40.597479 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-vvrp4" podStartSLOduration=4.597458519 podStartE2EDuration="4.597458519s" podCreationTimestamp="2026-01-29 17:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:05:40.591707577 +0000 UTC m=+2623.500426869" watchObservedRunningTime="2026-01-29 17:05:40.597458519 +0000 UTC m=+2623.506177791" Jan 29 17:05:40 crc kubenswrapper[4886]: I0129 17:05:40.630579 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-5m27f" podStartSLOduration=4.630542961 podStartE2EDuration="4.630542961s" podCreationTimestamp="2026-01-29 17:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:05:40.606784901 +0000 UTC m=+2623.515504173" watchObservedRunningTime="2026-01-29 17:05:40.630542961 +0000 UTC m=+2623.539262233" Jan 29 17:05:41 crc kubenswrapper[4886]: I0129 17:05:41.500019 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47","Type":"ContainerStarted","Data":"1bd7c804046c935666c5c31215dfb2339d74de5eb7be720b59ecc3c3a7162026"} Jan 29 17:05:41 crc kubenswrapper[4886]: I0129 17:05:41.500654 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47","Type":"ContainerStarted","Data":"508700665a3990bb13f200c3b8750ea2e16465f0fcff9c608e221b69f0ace0f8"} Jan 29 17:05:44 crc kubenswrapper[4886]: I0129 17:05:44.548352 4886 generic.go:334] "Generic (PLEG): container finished" podID="219e979e-b3a8-42d0-8f23-737a86a2aefb" containerID="ce7bb70d8d66605a00b65db196f138b8d093db85ba2aba770dcd073411b5b8b4" exitCode=0 Jan 29 17:05:44 crc kubenswrapper[4886]: I0129 17:05:44.548442 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-b8qfq" event={"ID":"219e979e-b3a8-42d0-8f23-737a86a2aefb","Type":"ContainerDied","Data":"ce7bb70d8d66605a00b65db196f138b8d093db85ba2aba770dcd073411b5b8b4"} Jan 29 17:05:44 crc kubenswrapper[4886]: I0129 17:05:44.555447 4886 generic.go:334] "Generic (PLEG): container finished" podID="61eedb40-ed14-42aa-9751-8bedcd699260" containerID="9211a739518fb120e2bda32757d910dcbc67d03a2ddbfea02f5bc9964d2f0a2d" exitCode=0 Jan 29 17:05:44 crc kubenswrapper[4886]: I0129 17:05:44.555513 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-db-create-vvrp4" event={"ID":"61eedb40-ed14-42aa-9751-8bedcd699260","Type":"ContainerDied","Data":"9211a739518fb120e2bda32757d910dcbc67d03a2ddbfea02f5bc9964d2f0a2d"} Jan 29 17:05:44 crc kubenswrapper[4886]: I0129 17:05:44.557517 4886 generic.go:334] "Generic (PLEG): container finished" podID="eca25333-29b2-4c38-9e85-ebd2a0d593d6" containerID="c217cd04d2dba654b23c94e4b5b9acb5912a4546fafe4781e26a2d0d53058004" exitCode=0 Jan 29 17:05:44 crc kubenswrapper[4886]: I0129 17:05:44.557556 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-5m27f" event={"ID":"eca25333-29b2-4c38-9e85-ebd2a0d593d6","Type":"ContainerDied","Data":"c217cd04d2dba654b23c94e4b5b9acb5912a4546fafe4781e26a2d0d53058004"} Jan 29 17:05:46 crc kubenswrapper[4886]: I0129 17:05:46.218971 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-5m27f" Jan 29 17:05:46 crc kubenswrapper[4886]: I0129 17:05:46.225519 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-b8qfq" Jan 29 17:05:46 crc kubenswrapper[4886]: I0129 17:05:46.234586 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vvrp4" Jan 29 17:05:46 crc kubenswrapper[4886]: I0129 17:05:46.347797 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrmtg\" (UniqueName: \"kubernetes.io/projected/219e979e-b3a8-42d0-8f23-737a86a2aefb-kube-api-access-qrmtg\") pod \"219e979e-b3a8-42d0-8f23-737a86a2aefb\" (UID: \"219e979e-b3a8-42d0-8f23-737a86a2aefb\") " Jan 29 17:05:46 crc kubenswrapper[4886]: I0129 17:05:46.347905 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eca25333-29b2-4c38-9e85-ebd2a0d593d6-operator-scripts\") pod \"eca25333-29b2-4c38-9e85-ebd2a0d593d6\" (UID: \"eca25333-29b2-4c38-9e85-ebd2a0d593d6\") " Jan 29 17:05:46 crc kubenswrapper[4886]: I0129 17:05:46.347949 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61eedb40-ed14-42aa-9751-8bedcd699260-operator-scripts\") pod \"61eedb40-ed14-42aa-9751-8bedcd699260\" (UID: \"61eedb40-ed14-42aa-9751-8bedcd699260\") " Jan 29 17:05:46 crc kubenswrapper[4886]: I0129 17:05:46.348107 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8fng\" (UniqueName: \"kubernetes.io/projected/61eedb40-ed14-42aa-9751-8bedcd699260-kube-api-access-r8fng\") pod \"61eedb40-ed14-42aa-9751-8bedcd699260\" (UID: \"61eedb40-ed14-42aa-9751-8bedcd699260\") " Jan 29 17:05:46 crc kubenswrapper[4886]: I0129 17:05:46.348223 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8zm4\" (UniqueName: \"kubernetes.io/projected/eca25333-29b2-4c38-9e85-ebd2a0d593d6-kube-api-access-c8zm4\") pod \"eca25333-29b2-4c38-9e85-ebd2a0d593d6\" (UID: \"eca25333-29b2-4c38-9e85-ebd2a0d593d6\") " Jan 29 17:05:46 crc kubenswrapper[4886]: I0129 17:05:46.348270 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/219e979e-b3a8-42d0-8f23-737a86a2aefb-operator-scripts\") pod \"219e979e-b3a8-42d0-8f23-737a86a2aefb\" (UID: \"219e979e-b3a8-42d0-8f23-737a86a2aefb\") " Jan 29 17:05:46 crc kubenswrapper[4886]: I0129 17:05:46.348907 4886 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/219e979e-b3a8-42d0-8f23-737a86a2aefb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "219e979e-b3a8-42d0-8f23-737a86a2aefb" (UID: "219e979e-b3a8-42d0-8f23-737a86a2aefb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:46 crc kubenswrapper[4886]: I0129 17:05:46.348925 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61eedb40-ed14-42aa-9751-8bedcd699260-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "61eedb40-ed14-42aa-9751-8bedcd699260" (UID: "61eedb40-ed14-42aa-9751-8bedcd699260"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:46 crc kubenswrapper[4886]: I0129 17:05:46.348962 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eca25333-29b2-4c38-9e85-ebd2a0d593d6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eca25333-29b2-4c38-9e85-ebd2a0d593d6" (UID: "eca25333-29b2-4c38-9e85-ebd2a0d593d6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:46 crc kubenswrapper[4886]: I0129 17:05:46.349232 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eca25333-29b2-4c38-9e85-ebd2a0d593d6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:46 crc kubenswrapper[4886]: I0129 17:05:46.349258 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61eedb40-ed14-42aa-9751-8bedcd699260-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:46 crc kubenswrapper[4886]: I0129 17:05:46.349268 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/219e979e-b3a8-42d0-8f23-737a86a2aefb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:46 crc kubenswrapper[4886]: I0129 17:05:46.354025 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eca25333-29b2-4c38-9e85-ebd2a0d593d6-kube-api-access-c8zm4" (OuterVolumeSpecName: "kube-api-access-c8zm4") pod "eca25333-29b2-4c38-9e85-ebd2a0d593d6" (UID: "eca25333-29b2-4c38-9e85-ebd2a0d593d6"). InnerVolumeSpecName "kube-api-access-c8zm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:46 crc kubenswrapper[4886]: I0129 17:05:46.354432 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/219e979e-b3a8-42d0-8f23-737a86a2aefb-kube-api-access-qrmtg" (OuterVolumeSpecName: "kube-api-access-qrmtg") pod "219e979e-b3a8-42d0-8f23-737a86a2aefb" (UID: "219e979e-b3a8-42d0-8f23-737a86a2aefb"). InnerVolumeSpecName "kube-api-access-qrmtg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:46 crc kubenswrapper[4886]: I0129 17:05:46.354539 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61eedb40-ed14-42aa-9751-8bedcd699260-kube-api-access-r8fng" (OuterVolumeSpecName: "kube-api-access-r8fng") pod "61eedb40-ed14-42aa-9751-8bedcd699260" (UID: "61eedb40-ed14-42aa-9751-8bedcd699260"). InnerVolumeSpecName "kube-api-access-r8fng". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:46 crc kubenswrapper[4886]: I0129 17:05:46.451662 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8fng\" (UniqueName: \"kubernetes.io/projected/61eedb40-ed14-42aa-9751-8bedcd699260-kube-api-access-r8fng\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:46 crc kubenswrapper[4886]: I0129 17:05:46.451694 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8zm4\" (UniqueName: \"kubernetes.io/projected/eca25333-29b2-4c38-9e85-ebd2a0d593d6-kube-api-access-c8zm4\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:46 crc kubenswrapper[4886]: I0129 17:05:46.451704 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrmtg\" (UniqueName: \"kubernetes.io/projected/219e979e-b3a8-42d0-8f23-737a86a2aefb-kube-api-access-qrmtg\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:46 crc kubenswrapper[4886]: I0129 17:05:46.587370 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-b8qfq" Jan 29 17:05:46 crc kubenswrapper[4886]: I0129 17:05:46.587997 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-b8qfq" event={"ID":"219e979e-b3a8-42d0-8f23-737a86a2aefb","Type":"ContainerDied","Data":"4406b94675c6c7ae9446195f8dfab310f4fa8a3adf586cc31ec4c425aaec53ea"} Jan 29 17:05:46 crc kubenswrapper[4886]: I0129 17:05:46.588149 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4406b94675c6c7ae9446195f8dfab310f4fa8a3adf586cc31ec4c425aaec53ea" Jan 29 17:05:46 crc kubenswrapper[4886]: I0129 17:05:46.592303 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vvrp4" Jan 29 17:05:46 crc kubenswrapper[4886]: I0129 17:05:46.592301 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vvrp4" event={"ID":"61eedb40-ed14-42aa-9751-8bedcd699260","Type":"ContainerDied","Data":"9fec24589ec3e892ddf58d22ea6ebcc076444b7d5a5a5f362446314614208572"} Jan 29 17:05:46 crc kubenswrapper[4886]: I0129 17:05:46.592464 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fec24589ec3e892ddf58d22ea6ebcc076444b7d5a5a5f362446314614208572" Jan 29 17:05:46 crc kubenswrapper[4886]: I0129 17:05:46.599565 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-5m27f" event={"ID":"eca25333-29b2-4c38-9e85-ebd2a0d593d6","Type":"ContainerDied","Data":"cd4898dfd3366424ff76daf2236da5aa1109f2d2ee7053756e696c5c71f74315"} Jan 29 17:05:46 crc kubenswrapper[4886]: I0129 17:05:46.599610 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd4898dfd3366424ff76daf2236da5aa1109f2d2ee7053756e696c5c71f74315" Jan 29 17:05:46 crc kubenswrapper[4886]: I0129 17:05:46.599679 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-5m27f" Jan 29 17:05:47 crc kubenswrapper[4886]: I0129 17:05:47.616127 4886 generic.go:334] "Generic (PLEG): container finished" podID="f34bb765-0998-45ea-bb61-9fbbc2c7359d" containerID="78746abbdca4d80f0a57707d5af0310c508403ee469b611bd3861cf01570354a" exitCode=0 Jan 29 17:05:47 crc kubenswrapper[4886]: I0129 17:05:47.616286 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mj8rv" event={"ID":"f34bb765-0998-45ea-bb61-9fbbc2c7359d","Type":"ContainerDied","Data":"78746abbdca4d80f0a57707d5af0310c508403ee469b611bd3861cf01570354a"} Jan 29 17:05:49 crc kubenswrapper[4886]: I0129 17:05:49.161415 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mj8rv" Jan 29 17:05:49 crc kubenswrapper[4886]: I0129 17:05:49.209288 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9f7t\" (UniqueName: \"kubernetes.io/projected/f34bb765-0998-45ea-bb61-9fbbc2c7359d-kube-api-access-r9f7t\") pod \"f34bb765-0998-45ea-bb61-9fbbc2c7359d\" (UID: \"f34bb765-0998-45ea-bb61-9fbbc2c7359d\") " Jan 29 17:05:49 crc kubenswrapper[4886]: I0129 17:05:49.209439 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f34bb765-0998-45ea-bb61-9fbbc2c7359d-operator-scripts\") pod \"f34bb765-0998-45ea-bb61-9fbbc2c7359d\" (UID: \"f34bb765-0998-45ea-bb61-9fbbc2c7359d\") " Jan 29 17:05:49 crc kubenswrapper[4886]: I0129 17:05:49.210628 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f34bb765-0998-45ea-bb61-9fbbc2c7359d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f34bb765-0998-45ea-bb61-9fbbc2c7359d" (UID: "f34bb765-0998-45ea-bb61-9fbbc2c7359d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:49 crc kubenswrapper[4886]: I0129 17:05:49.214543 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f34bb765-0998-45ea-bb61-9fbbc2c7359d-kube-api-access-r9f7t" (OuterVolumeSpecName: "kube-api-access-r9f7t") pod "f34bb765-0998-45ea-bb61-9fbbc2c7359d" (UID: "f34bb765-0998-45ea-bb61-9fbbc2c7359d"). InnerVolumeSpecName "kube-api-access-r9f7t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:49 crc kubenswrapper[4886]: I0129 17:05:49.311367 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9f7t\" (UniqueName: \"kubernetes.io/projected/f34bb765-0998-45ea-bb61-9fbbc2c7359d-kube-api-access-r9f7t\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:49 crc kubenswrapper[4886]: I0129 17:05:49.311746 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f34bb765-0998-45ea-bb61-9fbbc2c7359d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:49 crc kubenswrapper[4886]: I0129 17:05:49.639558 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mj8rv" event={"ID":"f34bb765-0998-45ea-bb61-9fbbc2c7359d","Type":"ContainerDied","Data":"72783bbbfa79040fb4dc3f351898bfde9b1e9857733a1a00ee4d73ce0d7d9e05"} Jan 29 17:05:49 crc kubenswrapper[4886]: I0129 17:05:49.639602 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72783bbbfa79040fb4dc3f351898bfde9b1e9857733a1a00ee4d73ce0d7d9e05" Jan 29 17:05:49 crc kubenswrapper[4886]: I0129 17:05:49.639707 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mj8rv" Jan 29 17:05:49 crc kubenswrapper[4886]: I0129 17:05:49.641579 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8whvl" event={"ID":"6c9729b7-e21b-4509-b337-618094fb2d52","Type":"ContainerStarted","Data":"c0779e333572b6cd2f4e3dc26dcb63d1cb95b806d59884314b143132c6990518"} Jan 29 17:05:50 crc kubenswrapper[4886]: I0129 17:05:50.658360 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47","Type":"ContainerStarted","Data":"6126943d7b638f196656287460bc709c85af1650fa60f2f844a7a6f316656604"} Jan 29 17:05:50 crc kubenswrapper[4886]: I0129 17:05:50.682062 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-8whvl" podStartSLOduration=4.277903578 podStartE2EDuration="14.681857912s" podCreationTimestamp="2026-01-29 17:05:36 +0000 UTC" firstStartedPulling="2026-01-29 17:05:37.978908746 +0000 UTC m=+2620.887628018" lastFinishedPulling="2026-01-29 17:05:48.38286308 +0000 UTC m=+2631.291582352" observedRunningTime="2026-01-29 17:05:50.677004385 +0000 UTC m=+2633.585723657" watchObservedRunningTime="2026-01-29 17:05:50.681857912 +0000 UTC m=+2633.590577214" Jan 29 17:05:52 crc kubenswrapper[4886]: I0129 17:05:52.696514 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47","Type":"ContainerStarted","Data":"38ba0cd3468aa429dc897f4bd9147d61f68e0f9426d858466e58c1d619c3733a"} Jan 29 17:05:53 crc kubenswrapper[4886]: I0129 17:05:53.715395 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47","Type":"ContainerStarted","Data":"3dee29400a52b22f7939257c24506891bbdab5055ef175281c8ec228f41e480c"} Jan 29 17:05:53 crc kubenswrapper[4886]: I0129 17:05:53.715767 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47","Type":"ContainerStarted","Data":"d364556efd056c1123837a5a65a22f2ed93984242b85061596ededa15610db30"} Jan 29 17:05:53 crc kubenswrapper[4886]: I0129 17:05:53.715783 4886 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47","Type":"ContainerStarted","Data":"07c28c01a5e885b091f2f4f7ce2f122664a0193a92973254f6fbae68b306e373"} Jan 29 17:05:54 crc kubenswrapper[4886]: I0129 17:05:54.732039 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47","Type":"ContainerStarted","Data":"fb9bdac361a70b2b4c04db09f01f5ae914f23d75caf457e2a81d51a2bfa4b8da"} Jan 29 17:05:55 crc kubenswrapper[4886]: I0129 17:05:55.747302 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6e2f2c6c-bc32-4a32-ba2c-8954d277ce47","Type":"ContainerStarted","Data":"5c8bbde2c57263f7855652fcaace4af5662bbf25ad6eae81ddbdcc492a471484"} Jan 29 17:05:55 crc kubenswrapper[4886]: I0129 17:05:55.787080 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=75.927984867 podStartE2EDuration="1m44.787059653s" podCreationTimestamp="2026-01-29 17:04:11 +0000 UTC" firstStartedPulling="2026-01-29 17:05:19.526879231 +0000 UTC m=+2602.435598503" lastFinishedPulling="2026-01-29 17:05:48.385954007 +0000 UTC m=+2631.294673289" observedRunningTime="2026-01-29 17:05:55.785664694 +0000 UTC m=+2638.694384016" watchObservedRunningTime="2026-01-29 17:05:55.787059653 +0000 UTC m=+2638.695778925" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.114086 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-6r9cj"] Jan 29 17:05:56 crc kubenswrapper[4886]: E0129 17:05:56.115046 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61eedb40-ed14-42aa-9751-8bedcd699260" containerName="mariadb-database-create" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.115068 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="61eedb40-ed14-42aa-9751-8bedcd699260" containerName="mariadb-database-create" Jan 29 17:05:56 crc kubenswrapper[4886]: E0129 17:05:56.115098 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f34bb765-0998-45ea-bb61-9fbbc2c7359d" containerName="mariadb-database-create" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.115107 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f34bb765-0998-45ea-bb61-9fbbc2c7359d" containerName="mariadb-database-create" Jan 29 17:05:56 crc kubenswrapper[4886]: E0129 17:05:56.115133 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="219e979e-b3a8-42d0-8f23-737a86a2aefb" containerName="mariadb-database-create" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.115141 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="219e979e-b3a8-42d0-8f23-737a86a2aefb" containerName="mariadb-database-create" Jan 29 17:05:56 crc kubenswrapper[4886]: E0129 17:05:56.115159 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eca25333-29b2-4c38-9e85-ebd2a0d593d6" containerName="mariadb-database-create" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.115166 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="eca25333-29b2-4c38-9e85-ebd2a0d593d6" containerName="mariadb-database-create" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.115436 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="eca25333-29b2-4c38-9e85-ebd2a0d593d6" containerName="mariadb-database-create" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.115482 4886 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="61eedb40-ed14-42aa-9751-8bedcd699260" containerName="mariadb-database-create" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.115500 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="219e979e-b3a8-42d0-8f23-737a86a2aefb" containerName="mariadb-database-create" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.115517 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f34bb765-0998-45ea-bb61-9fbbc2c7359d" containerName="mariadb-database-create" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.117178 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.125066 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-6r9cj"] Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.125471 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.268212 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-config\") pod \"dnsmasq-dns-764c5664d7-6r9cj\" (UID: \"0ebe69f9-b35b-47a6-976d-bca3b8b8af25\") " pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.268314 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x27r6\" (UniqueName: \"kubernetes.io/projected/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-kube-api-access-x27r6\") pod \"dnsmasq-dns-764c5664d7-6r9cj\" (UID: \"0ebe69f9-b35b-47a6-976d-bca3b8b8af25\") " pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.268577 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-dns-svc\") pod \"dnsmasq-dns-764c5664d7-6r9cj\" (UID: \"0ebe69f9-b35b-47a6-976d-bca3b8b8af25\") " pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.268737 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-6r9cj\" (UID: \"0ebe69f9-b35b-47a6-976d-bca3b8b8af25\") " pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.269012 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-6r9cj\" (UID: \"0ebe69f9-b35b-47a6-976d-bca3b8b8af25\") " pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.269119 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-6r9cj\" (UID: \"0ebe69f9-b35b-47a6-976d-bca3b8b8af25\") " pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 
17:05:56.371346 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-dns-svc\") pod \"dnsmasq-dns-764c5664d7-6r9cj\" (UID: \"0ebe69f9-b35b-47a6-976d-bca3b8b8af25\") " pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.371412 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-6r9cj\" (UID: \"0ebe69f9-b35b-47a6-976d-bca3b8b8af25\") " pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.371511 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-6r9cj\" (UID: \"0ebe69f9-b35b-47a6-976d-bca3b8b8af25\") " pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.371563 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-6r9cj\" (UID: \"0ebe69f9-b35b-47a6-976d-bca3b8b8af25\") " pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.371619 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-config\") pod \"dnsmasq-dns-764c5664d7-6r9cj\" (UID: \"0ebe69f9-b35b-47a6-976d-bca3b8b8af25\") " pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.371651 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x27r6\" (UniqueName: \"kubernetes.io/projected/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-kube-api-access-x27r6\") pod \"dnsmasq-dns-764c5664d7-6r9cj\" (UID: \"0ebe69f9-b35b-47a6-976d-bca3b8b8af25\") " pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.372491 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-6r9cj\" (UID: \"0ebe69f9-b35b-47a6-976d-bca3b8b8af25\") " pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.372648 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-config\") pod \"dnsmasq-dns-764c5664d7-6r9cj\" (UID: \"0ebe69f9-b35b-47a6-976d-bca3b8b8af25\") " pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.372651 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-6r9cj\" (UID: \"0ebe69f9-b35b-47a6-976d-bca3b8b8af25\") " pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.372714 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-dns-svc\") pod \"dnsmasq-dns-764c5664d7-6r9cj\" (UID: \"0ebe69f9-b35b-47a6-976d-bca3b8b8af25\") " pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.373296 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-6r9cj\" (UID: \"0ebe69f9-b35b-47a6-976d-bca3b8b8af25\") " pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.461545 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x27r6\" (UniqueName: \"kubernetes.io/projected/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-kube-api-access-x27r6\") pod \"dnsmasq-dns-764c5664d7-6r9cj\" (UID: \"0ebe69f9-b35b-47a6-976d-bca3b8b8af25\") " pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.750365 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.759280 4886 generic.go:334] "Generic (PLEG): container finished" podID="c31fe7aa-0ad1-44ef-a748-b4f366a4d374" containerID="1b2a63dcfed7450a36197cbdc154c29e365ef6be50e63a79bd321d9e35afd21f" exitCode=0 Jan 29 17:05:56 crc kubenswrapper[4886]: I0129 17:05:56.759381 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-bd38-account-create-update-rgmr5" event={"ID":"c31fe7aa-0ad1-44ef-a748-b4f366a4d374","Type":"ContainerDied","Data":"1b2a63dcfed7450a36197cbdc154c29e365ef6be50e63a79bd321d9e35afd21f"} Jan 29 17:05:57 crc kubenswrapper[4886]: W0129 17:05:57.413566 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ebe69f9_b35b_47a6_976d_bca3b8b8af25.slice/crio-8a5d3dfd30af2f5ac812c053e6d3808dbffd8286368baff784598dc2a9536f00 WatchSource:0}: Error finding container 8a5d3dfd30af2f5ac812c053e6d3808dbffd8286368baff784598dc2a9536f00: Status 404 returned error can't find the container with id 8a5d3dfd30af2f5ac812c053e6d3808dbffd8286368baff784598dc2a9536f00 Jan 29 17:05:57 crc kubenswrapper[4886]: I0129 17:05:57.415982 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-6r9cj"] Jan 29 17:05:57 crc kubenswrapper[4886]: I0129 17:05:57.780246 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" event={"ID":"0ebe69f9-b35b-47a6-976d-bca3b8b8af25","Type":"ContainerStarted","Data":"8a5d3dfd30af2f5ac812c053e6d3808dbffd8286368baff784598dc2a9536f00"} Jan 29 17:05:58 crc kubenswrapper[4886]: I0129 17:05:58.173318 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-bd38-account-create-update-rgmr5" Jan 29 17:05:58 crc kubenswrapper[4886]: I0129 17:05:58.314796 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c31fe7aa-0ad1-44ef-a748-b4f366a4d374-operator-scripts\") pod \"c31fe7aa-0ad1-44ef-a748-b4f366a4d374\" (UID: \"c31fe7aa-0ad1-44ef-a748-b4f366a4d374\") " Jan 29 17:05:58 crc kubenswrapper[4886]: I0129 17:05:58.315231 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lv2x\" (UniqueName: \"kubernetes.io/projected/c31fe7aa-0ad1-44ef-a748-b4f366a4d374-kube-api-access-6lv2x\") pod \"c31fe7aa-0ad1-44ef-a748-b4f366a4d374\" (UID: \"c31fe7aa-0ad1-44ef-a748-b4f366a4d374\") " Jan 29 17:05:58 crc kubenswrapper[4886]: I0129 17:05:58.315883 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c31fe7aa-0ad1-44ef-a748-b4f366a4d374-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c31fe7aa-0ad1-44ef-a748-b4f366a4d374" (UID: "c31fe7aa-0ad1-44ef-a748-b4f366a4d374"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:58 crc kubenswrapper[4886]: I0129 17:05:58.321081 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c31fe7aa-0ad1-44ef-a748-b4f366a4d374-kube-api-access-6lv2x" (OuterVolumeSpecName: "kube-api-access-6lv2x") pod "c31fe7aa-0ad1-44ef-a748-b4f366a4d374" (UID: "c31fe7aa-0ad1-44ef-a748-b4f366a4d374"). InnerVolumeSpecName "kube-api-access-6lv2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:58 crc kubenswrapper[4886]: I0129 17:05:58.417764 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c31fe7aa-0ad1-44ef-a748-b4f366a4d374-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:58 crc kubenswrapper[4886]: I0129 17:05:58.417811 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lv2x\" (UniqueName: \"kubernetes.io/projected/c31fe7aa-0ad1-44ef-a748-b4f366a4d374-kube-api-access-6lv2x\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:58 crc kubenswrapper[4886]: I0129 17:05:58.789863 4886 generic.go:334] "Generic (PLEG): container finished" podID="2b3dc785-5f55-49ca-8678-5105ba7e0568" containerID="e61c63ed7fdb0d740a758c779dfae1d17126672ffa65adff6cc5cd29f6bcc51c" exitCode=0 Jan 29 17:05:58 crc kubenswrapper[4886]: I0129 17:05:58.789963 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-70c1-account-create-update-gwzzv" event={"ID":"2b3dc785-5f55-49ca-8678-5105ba7e0568","Type":"ContainerDied","Data":"e61c63ed7fdb0d740a758c779dfae1d17126672ffa65adff6cc5cd29f6bcc51c"} Jan 29 17:05:58 crc kubenswrapper[4886]: I0129 17:05:58.791793 4886 generic.go:334] "Generic (PLEG): container finished" podID="0ebe69f9-b35b-47a6-976d-bca3b8b8af25" containerID="d79e54176b743ae62954d38e473d94b6d45be717a470bbf226985d6f28fe5bd4" exitCode=0 Jan 29 17:05:58 crc kubenswrapper[4886]: I0129 17:05:58.791849 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" event={"ID":"0ebe69f9-b35b-47a6-976d-bca3b8b8af25","Type":"ContainerDied","Data":"d79e54176b743ae62954d38e473d94b6d45be717a470bbf226985d6f28fe5bd4"} Jan 29 17:05:58 crc kubenswrapper[4886]: I0129 17:05:58.793123 4886 generic.go:334] "Generic (PLEG): container finished" 
podID="95df3f15-8d1d-4baf-bbb6-df4939f0d201" containerID="05a52ecdbf485c6c724d9a992c69aca83958ea1704df0dac8409ddf6fbc7b4d1" exitCode=0 Jan 29 17:05:58 crc kubenswrapper[4886]: I0129 17:05:58.793182 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-4501-account-create-update-hj72z" event={"ID":"95df3f15-8d1d-4baf-bbb6-df4939f0d201","Type":"ContainerDied","Data":"05a52ecdbf485c6c724d9a992c69aca83958ea1704df0dac8409ddf6fbc7b4d1"} Jan 29 17:05:58 crc kubenswrapper[4886]: I0129 17:05:58.834666 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-bd38-account-create-update-rgmr5" event={"ID":"c31fe7aa-0ad1-44ef-a748-b4f366a4d374","Type":"ContainerDied","Data":"5d2dfc86002d797af59c9cb682ec219bf20ee62338a9f69385af929e1e8a81cc"} Jan 29 17:05:58 crc kubenswrapper[4886]: I0129 17:05:58.834714 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d2dfc86002d797af59c9cb682ec219bf20ee62338a9f69385af929e1e8a81cc" Jan 29 17:05:58 crc kubenswrapper[4886]: I0129 17:05:58.834715 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-bd38-account-create-update-rgmr5" Jan 29 17:05:58 crc kubenswrapper[4886]: I0129 17:05:58.840830 4886 generic.go:334] "Generic (PLEG): container finished" podID="b8e697ee-193d-4ce1-9905-cebf2e6ba7ff" containerID="c6fd592bb372f4bd56073a5709a8ef40ff848343cbd26b66d1e162d12eab6737" exitCode=0 Jan 29 17:05:58 crc kubenswrapper[4886]: I0129 17:05:58.840883 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e433-account-create-update-qm5sx" event={"ID":"b8e697ee-193d-4ce1-9905-cebf2e6ba7ff","Type":"ContainerDied","Data":"c6fd592bb372f4bd56073a5709a8ef40ff848343cbd26b66d1e162d12eab6737"} Jan 29 17:05:59 crc kubenswrapper[4886]: I0129 17:05:59.853777 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" event={"ID":"0ebe69f9-b35b-47a6-976d-bca3b8b8af25","Type":"ContainerStarted","Data":"9d62c141d557ad4f511cc99617ca7914a9fcfe251f2f34d5a37428a245460d8c"} Jan 29 17:05:59 crc kubenswrapper[4886]: I0129 17:05:59.856255 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" Jan 29 17:05:59 crc kubenswrapper[4886]: I0129 17:05:59.878989 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" podStartSLOduration=3.878969594 podStartE2EDuration="3.878969594s" podCreationTimestamp="2026-01-29 17:05:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:05:59.873389207 +0000 UTC m=+2642.782108509" watchObservedRunningTime="2026-01-29 17:05:59.878969594 +0000 UTC m=+2642.787688866" Jan 29 17:06:00 crc kubenswrapper[4886]: I0129 17:06:00.406831 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e433-account-create-update-qm5sx" Jan 29 17:06:00 crc kubenswrapper[4886]: I0129 17:06:00.414819 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-4501-account-create-update-hj72z" Jan 29 17:06:00 crc kubenswrapper[4886]: I0129 17:06:00.423241 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-70c1-account-create-update-gwzzv" Jan 29 17:06:00 crc kubenswrapper[4886]: I0129 17:06:00.469284 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kv5p\" (UniqueName: \"kubernetes.io/projected/b8e697ee-193d-4ce1-9905-cebf2e6ba7ff-kube-api-access-7kv5p\") pod \"b8e697ee-193d-4ce1-9905-cebf2e6ba7ff\" (UID: \"b8e697ee-193d-4ce1-9905-cebf2e6ba7ff\") " Jan 29 17:06:00 crc kubenswrapper[4886]: I0129 17:06:00.469382 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8e697ee-193d-4ce1-9905-cebf2e6ba7ff-operator-scripts\") pod \"b8e697ee-193d-4ce1-9905-cebf2e6ba7ff\" (UID: \"b8e697ee-193d-4ce1-9905-cebf2e6ba7ff\") " Jan 29 17:06:00 crc kubenswrapper[4886]: I0129 17:06:00.469953 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8e697ee-193d-4ce1-9905-cebf2e6ba7ff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b8e697ee-193d-4ce1-9905-cebf2e6ba7ff" (UID: "b8e697ee-193d-4ce1-9905-cebf2e6ba7ff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:06:00 crc kubenswrapper[4886]: I0129 17:06:00.474526 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8e697ee-193d-4ce1-9905-cebf2e6ba7ff-kube-api-access-7kv5p" (OuterVolumeSpecName: "kube-api-access-7kv5p") pod "b8e697ee-193d-4ce1-9905-cebf2e6ba7ff" (UID: "b8e697ee-193d-4ce1-9905-cebf2e6ba7ff"). InnerVolumeSpecName "kube-api-access-7kv5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:06:00 crc kubenswrapper[4886]: I0129 17:06:00.570459 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg89r\" (UniqueName: \"kubernetes.io/projected/95df3f15-8d1d-4baf-bbb6-df4939f0d201-kube-api-access-rg89r\") pod \"95df3f15-8d1d-4baf-bbb6-df4939f0d201\" (UID: \"95df3f15-8d1d-4baf-bbb6-df4939f0d201\") " Jan 29 17:06:00 crc kubenswrapper[4886]: I0129 17:06:00.570593 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95df3f15-8d1d-4baf-bbb6-df4939f0d201-operator-scripts\") pod \"95df3f15-8d1d-4baf-bbb6-df4939f0d201\" (UID: \"95df3f15-8d1d-4baf-bbb6-df4939f0d201\") " Jan 29 17:06:00 crc kubenswrapper[4886]: I0129 17:06:00.570838 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b3dc785-5f55-49ca-8678-5105ba7e0568-operator-scripts\") pod \"2b3dc785-5f55-49ca-8678-5105ba7e0568\" (UID: \"2b3dc785-5f55-49ca-8678-5105ba7e0568\") " Jan 29 17:06:00 crc kubenswrapper[4886]: I0129 17:06:00.570886 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhtgf\" (UniqueName: \"kubernetes.io/projected/2b3dc785-5f55-49ca-8678-5105ba7e0568-kube-api-access-lhtgf\") pod \"2b3dc785-5f55-49ca-8678-5105ba7e0568\" (UID: \"2b3dc785-5f55-49ca-8678-5105ba7e0568\") " Jan 29 17:06:00 crc kubenswrapper[4886]: I0129 17:06:00.571458 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kv5p\" (UniqueName: \"kubernetes.io/projected/b8e697ee-193d-4ce1-9905-cebf2e6ba7ff-kube-api-access-7kv5p\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:00 crc kubenswrapper[4886]: I0129 17:06:00.571482 4886 reconciler_common.go:293] 
"Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8e697ee-193d-4ce1-9905-cebf2e6ba7ff-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:00 crc kubenswrapper[4886]: I0129 17:06:00.578669 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b3dc785-5f55-49ca-8678-5105ba7e0568-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2b3dc785-5f55-49ca-8678-5105ba7e0568" (UID: "2b3dc785-5f55-49ca-8678-5105ba7e0568"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:06:00 crc kubenswrapper[4886]: I0129 17:06:00.578827 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95df3f15-8d1d-4baf-bbb6-df4939f0d201-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95df3f15-8d1d-4baf-bbb6-df4939f0d201" (UID: "95df3f15-8d1d-4baf-bbb6-df4939f0d201"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:06:00 crc kubenswrapper[4886]: I0129 17:06:00.580571 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b3dc785-5f55-49ca-8678-5105ba7e0568-kube-api-access-lhtgf" (OuterVolumeSpecName: "kube-api-access-lhtgf") pod "2b3dc785-5f55-49ca-8678-5105ba7e0568" (UID: "2b3dc785-5f55-49ca-8678-5105ba7e0568"). InnerVolumeSpecName "kube-api-access-lhtgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:06:00 crc kubenswrapper[4886]: I0129 17:06:00.590649 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95df3f15-8d1d-4baf-bbb6-df4939f0d201-kube-api-access-rg89r" (OuterVolumeSpecName: "kube-api-access-rg89r") pod "95df3f15-8d1d-4baf-bbb6-df4939f0d201" (UID: "95df3f15-8d1d-4baf-bbb6-df4939f0d201"). InnerVolumeSpecName "kube-api-access-rg89r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:06:00 crc kubenswrapper[4886]: I0129 17:06:00.673722 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhtgf\" (UniqueName: \"kubernetes.io/projected/2b3dc785-5f55-49ca-8678-5105ba7e0568-kube-api-access-lhtgf\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:00 crc kubenswrapper[4886]: I0129 17:06:00.673762 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg89r\" (UniqueName: \"kubernetes.io/projected/95df3f15-8d1d-4baf-bbb6-df4939f0d201-kube-api-access-rg89r\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:00 crc kubenswrapper[4886]: I0129 17:06:00.673771 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95df3f15-8d1d-4baf-bbb6-df4939f0d201-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:00 crc kubenswrapper[4886]: I0129 17:06:00.673780 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b3dc785-5f55-49ca-8678-5105ba7e0568-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:00 crc kubenswrapper[4886]: I0129 17:06:00.864426 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-4501-account-create-update-hj72z" Jan 29 17:06:00 crc kubenswrapper[4886]: I0129 17:06:00.864417 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-4501-account-create-update-hj72z" event={"ID":"95df3f15-8d1d-4baf-bbb6-df4939f0d201","Type":"ContainerDied","Data":"e0c4c5770b60c8e587eeeb148d840581349fd237cbedc0ac808c5bcb6eecdacf"} Jan 29 17:06:00 crc kubenswrapper[4886]: I0129 17:06:00.864858 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0c4c5770b60c8e587eeeb148d840581349fd237cbedc0ac808c5bcb6eecdacf" Jan 29 17:06:00 crc kubenswrapper[4886]: I0129 17:06:00.866334 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e433-account-create-update-qm5sx" event={"ID":"b8e697ee-193d-4ce1-9905-cebf2e6ba7ff","Type":"ContainerDied","Data":"dda352e99ae8511daf9d45b3e13077ccd37a0c2ef1768700d23fc09ac829a3b5"} Jan 29 17:06:00 crc kubenswrapper[4886]: I0129 17:06:00.866372 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dda352e99ae8511daf9d45b3e13077ccd37a0c2ef1768700d23fc09ac829a3b5" Jan 29 17:06:00 crc kubenswrapper[4886]: I0129 17:06:00.866430 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e433-account-create-update-qm5sx" Jan 29 17:06:00 crc kubenswrapper[4886]: I0129 17:06:00.868402 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-70c1-account-create-update-gwzzv" event={"ID":"2b3dc785-5f55-49ca-8678-5105ba7e0568","Type":"ContainerDied","Data":"723376c3c9f49ffb2963a000b3bd3332b032ec0a620314db2f5d4affe87fe53d"} Jan 29 17:06:00 crc kubenswrapper[4886]: I0129 17:06:00.868456 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="723376c3c9f49ffb2963a000b3bd3332b032ec0a620314db2f5d4affe87fe53d" Jan 29 17:06:00 crc kubenswrapper[4886]: I0129 17:06:00.868428 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-70c1-account-create-update-gwzzv" Jan 29 17:06:01 crc kubenswrapper[4886]: I0129 17:06:01.878825 4886 generic.go:334] "Generic (PLEG): container finished" podID="6c9729b7-e21b-4509-b337-618094fb2d52" containerID="c0779e333572b6cd2f4e3dc26dcb63d1cb95b806d59884314b143132c6990518" exitCode=0 Jan 29 17:06:01 crc kubenswrapper[4886]: I0129 17:06:01.878930 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8whvl" event={"ID":"6c9729b7-e21b-4509-b337-618094fb2d52","Type":"ContainerDied","Data":"c0779e333572b6cd2f4e3dc26dcb63d1cb95b806d59884314b143132c6990518"} Jan 29 17:06:03 crc kubenswrapper[4886]: I0129 17:06:03.280074 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-8whvl" Jan 29 17:06:03 crc kubenswrapper[4886]: I0129 17:06:03.338616 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c9729b7-e21b-4509-b337-618094fb2d52-combined-ca-bundle\") pod \"6c9729b7-e21b-4509-b337-618094fb2d52\" (UID: \"6c9729b7-e21b-4509-b337-618094fb2d52\") " Jan 29 17:06:03 crc kubenswrapper[4886]: I0129 17:06:03.338666 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxrkh\" (UniqueName: \"kubernetes.io/projected/6c9729b7-e21b-4509-b337-618094fb2d52-kube-api-access-gxrkh\") pod \"6c9729b7-e21b-4509-b337-618094fb2d52\" (UID: \"6c9729b7-e21b-4509-b337-618094fb2d52\") " Jan 29 17:06:03 crc kubenswrapper[4886]: I0129 17:06:03.338759 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c9729b7-e21b-4509-b337-618094fb2d52-config-data\") pod \"6c9729b7-e21b-4509-b337-618094fb2d52\" (UID: \"6c9729b7-e21b-4509-b337-618094fb2d52\") " Jan 29 17:06:03 crc kubenswrapper[4886]: I0129 17:06:03.346792 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c9729b7-e21b-4509-b337-618094fb2d52-kube-api-access-gxrkh" (OuterVolumeSpecName: "kube-api-access-gxrkh") pod "6c9729b7-e21b-4509-b337-618094fb2d52" (UID: "6c9729b7-e21b-4509-b337-618094fb2d52"). InnerVolumeSpecName "kube-api-access-gxrkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:06:03 crc kubenswrapper[4886]: I0129 17:06:03.387310 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c9729b7-e21b-4509-b337-618094fb2d52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c9729b7-e21b-4509-b337-618094fb2d52" (UID: "6c9729b7-e21b-4509-b337-618094fb2d52"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:06:03 crc kubenswrapper[4886]: I0129 17:06:03.420511 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c9729b7-e21b-4509-b337-618094fb2d52-config-data" (OuterVolumeSpecName: "config-data") pod "6c9729b7-e21b-4509-b337-618094fb2d52" (UID: "6c9729b7-e21b-4509-b337-618094fb2d52"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:06:03 crc kubenswrapper[4886]: I0129 17:06:03.441000 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c9729b7-e21b-4509-b337-618094fb2d52-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:03 crc kubenswrapper[4886]: I0129 17:06:03.441032 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c9729b7-e21b-4509-b337-618094fb2d52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:03 crc kubenswrapper[4886]: I0129 17:06:03.441044 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxrkh\" (UniqueName: \"kubernetes.io/projected/6c9729b7-e21b-4509-b337-618094fb2d52-kube-api-access-gxrkh\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:03 crc kubenswrapper[4886]: I0129 17:06:03.922102 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8whvl" event={"ID":"6c9729b7-e21b-4509-b337-618094fb2d52","Type":"ContainerDied","Data":"5f929b6a33cac9c82c31ed28623b82d784e928ccd3655129beee8b99eab88731"} Jan 29 17:06:03 crc kubenswrapper[4886]: I0129 17:06:03.922161 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f929b6a33cac9c82c31ed28623b82d784e928ccd3655129beee8b99eab88731" Jan 29 17:06:03 crc kubenswrapper[4886]: I0129 17:06:03.922626 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8whvl" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.162756 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-6r9cj"] Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.163027 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" podUID="0ebe69f9-b35b-47a6-976d-bca3b8b8af25" containerName="dnsmasq-dns" containerID="cri-o://9d62c141d557ad4f511cc99617ca7914a9fcfe251f2f34d5a37428a245460d8c" gracePeriod=10 Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.165591 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.208800 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-b5c9h"] Jan 29 17:06:04 crc kubenswrapper[4886]: E0129 17:06:04.209461 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8e697ee-193d-4ce1-9905-cebf2e6ba7ff" containerName="mariadb-account-create-update" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.209488 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e697ee-193d-4ce1-9905-cebf2e6ba7ff" containerName="mariadb-account-create-update" Jan 29 17:06:04 crc kubenswrapper[4886]: E0129 17:06:04.209509 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b3dc785-5f55-49ca-8678-5105ba7e0568" containerName="mariadb-account-create-update" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.209517 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b3dc785-5f55-49ca-8678-5105ba7e0568" containerName="mariadb-account-create-update" Jan 29 17:06:04 crc kubenswrapper[4886]: E0129 17:06:04.209564 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c9729b7-e21b-4509-b337-618094fb2d52" containerName="keystone-db-sync" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 
17:06:04.209572 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c9729b7-e21b-4509-b337-618094fb2d52" containerName="keystone-db-sync" Jan 29 17:06:04 crc kubenswrapper[4886]: E0129 17:06:04.209587 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c31fe7aa-0ad1-44ef-a748-b4f366a4d374" containerName="mariadb-account-create-update" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.209595 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c31fe7aa-0ad1-44ef-a748-b4f366a4d374" containerName="mariadb-account-create-update" Jan 29 17:06:04 crc kubenswrapper[4886]: E0129 17:06:04.209613 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95df3f15-8d1d-4baf-bbb6-df4939f0d201" containerName="mariadb-account-create-update" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.209620 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="95df3f15-8d1d-4baf-bbb6-df4939f0d201" containerName="mariadb-account-create-update" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.209899 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c9729b7-e21b-4509-b337-618094fb2d52" containerName="keystone-db-sync" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.209926 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="c31fe7aa-0ad1-44ef-a748-b4f366a4d374" containerName="mariadb-account-create-update" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.209943 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="95df3f15-8d1d-4baf-bbb6-df4939f0d201" containerName="mariadb-account-create-update" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.209965 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b3dc785-5f55-49ca-8678-5105ba7e0568" containerName="mariadb-account-create-update" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.209980 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8e697ee-193d-4ce1-9905-cebf2e6ba7ff" containerName="mariadb-account-create-update" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.213123 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-b5c9h" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.220052 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.221133 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.235435 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.235908 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-k5qcd" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.236154 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.258398 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-b5c9h"] Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.264196 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9fhh\" (UniqueName: \"kubernetes.io/projected/676a9025-a673-4a70-aa9d-ec34c1db17be-kube-api-access-n9fhh\") pod \"keystone-bootstrap-b5c9h\" (UID: \"676a9025-a673-4a70-aa9d-ec34c1db17be\") " pod="openstack/keystone-bootstrap-b5c9h" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.264257 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/676a9025-a673-4a70-aa9d-ec34c1db17be-combined-ca-bundle\") pod \"keystone-bootstrap-b5c9h\" (UID: \"676a9025-a673-4a70-aa9d-ec34c1db17be\") " pod="openstack/keystone-bootstrap-b5c9h" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.264409 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/676a9025-a673-4a70-aa9d-ec34c1db17be-fernet-keys\") pod \"keystone-bootstrap-b5c9h\" (UID: \"676a9025-a673-4a70-aa9d-ec34c1db17be\") " pod="openstack/keystone-bootstrap-b5c9h" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.264457 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/676a9025-a673-4a70-aa9d-ec34c1db17be-config-data\") pod \"keystone-bootstrap-b5c9h\" (UID: \"676a9025-a673-4a70-aa9d-ec34c1db17be\") " pod="openstack/keystone-bootstrap-b5c9h" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.264482 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/676a9025-a673-4a70-aa9d-ec34c1db17be-scripts\") pod \"keystone-bootstrap-b5c9h\" (UID: \"676a9025-a673-4a70-aa9d-ec34c1db17be\") " pod="openstack/keystone-bootstrap-b5c9h" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.264829 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/676a9025-a673-4a70-aa9d-ec34c1db17be-credential-keys\") pod \"keystone-bootstrap-b5c9h\" (UID: \"676a9025-a673-4a70-aa9d-ec34c1db17be\") " pod="openstack/keystone-bootstrap-b5c9h" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.289823 4886 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-5959f8865f-8962p"] Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.292098 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-8962p" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.308380 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-8962p"] Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.346742 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-6nmwn"] Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.348408 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-6nmwn" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.351004 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.351341 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-658st" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.371126 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/676a9025-a673-4a70-aa9d-ec34c1db17be-fernet-keys\") pod \"keystone-bootstrap-b5c9h\" (UID: \"676a9025-a673-4a70-aa9d-ec34c1db17be\") " pod="openstack/keystone-bootstrap-b5c9h" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.371191 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/676a9025-a673-4a70-aa9d-ec34c1db17be-config-data\") pod \"keystone-bootstrap-b5c9h\" (UID: \"676a9025-a673-4a70-aa9d-ec34c1db17be\") " pod="openstack/keystone-bootstrap-b5c9h" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.374594 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/676a9025-a673-4a70-aa9d-ec34c1db17be-scripts\") pod \"keystone-bootstrap-b5c9h\" (UID: \"676a9025-a673-4a70-aa9d-ec34c1db17be\") " pod="openstack/keystone-bootstrap-b5c9h" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.374668 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjk8f\" (UniqueName: \"kubernetes.io/projected/1fca7a19-7db1-4a2e-9f55-d55442cfda87-kube-api-access-kjk8f\") pod \"dnsmasq-dns-5959f8865f-8962p\" (UID: \"1fca7a19-7db1-4a2e-9f55-d55442cfda87\") " pod="openstack/dnsmasq-dns-5959f8865f-8962p" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.374746 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fca7a19-7db1-4a2e-9f55-d55442cfda87-config\") pod \"dnsmasq-dns-5959f8865f-8962p\" (UID: \"1fca7a19-7db1-4a2e-9f55-d55442cfda87\") " pod="openstack/dnsmasq-dns-5959f8865f-8962p" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.375134 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/676a9025-a673-4a70-aa9d-ec34c1db17be-credential-keys\") pod \"keystone-bootstrap-b5c9h\" (UID: \"676a9025-a673-4a70-aa9d-ec34c1db17be\") " pod="openstack/keystone-bootstrap-b5c9h" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.375276 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/1fca7a19-7db1-4a2e-9f55-d55442cfda87-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-8962p\" (UID: \"1fca7a19-7db1-4a2e-9f55-d55442cfda87\") " pod="openstack/dnsmasq-dns-5959f8865f-8962p" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.375377 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fca7a19-7db1-4a2e-9f55-d55442cfda87-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-8962p\" (UID: \"1fca7a19-7db1-4a2e-9f55-d55442cfda87\") " pod="openstack/dnsmasq-dns-5959f8865f-8962p" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.375579 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9fhh\" (UniqueName: \"kubernetes.io/projected/676a9025-a673-4a70-aa9d-ec34c1db17be-kube-api-access-n9fhh\") pod \"keystone-bootstrap-b5c9h\" (UID: \"676a9025-a673-4a70-aa9d-ec34c1db17be\") " pod="openstack/keystone-bootstrap-b5c9h" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.375620 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/676a9025-a673-4a70-aa9d-ec34c1db17be-combined-ca-bundle\") pod \"keystone-bootstrap-b5c9h\" (UID: \"676a9025-a673-4a70-aa9d-ec34c1db17be\") " pod="openstack/keystone-bootstrap-b5c9h" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.375695 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fca7a19-7db1-4a2e-9f55-d55442cfda87-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-8962p\" (UID: \"1fca7a19-7db1-4a2e-9f55-d55442cfda87\") " pod="openstack/dnsmasq-dns-5959f8865f-8962p" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.375731 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fca7a19-7db1-4a2e-9f55-d55442cfda87-dns-svc\") pod \"dnsmasq-dns-5959f8865f-8962p\" (UID: \"1fca7a19-7db1-4a2e-9f55-d55442cfda87\") " pod="openstack/dnsmasq-dns-5959f8865f-8962p" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.396749 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/676a9025-a673-4a70-aa9d-ec34c1db17be-scripts\") pod \"keystone-bootstrap-b5c9h\" (UID: \"676a9025-a673-4a70-aa9d-ec34c1db17be\") " pod="openstack/keystone-bootstrap-b5c9h" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.396957 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/676a9025-a673-4a70-aa9d-ec34c1db17be-config-data\") pod \"keystone-bootstrap-b5c9h\" (UID: \"676a9025-a673-4a70-aa9d-ec34c1db17be\") " pod="openstack/keystone-bootstrap-b5c9h" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.397752 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/676a9025-a673-4a70-aa9d-ec34c1db17be-fernet-keys\") pod \"keystone-bootstrap-b5c9h\" (UID: \"676a9025-a673-4a70-aa9d-ec34c1db17be\") " pod="openstack/keystone-bootstrap-b5c9h" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.398964 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/676a9025-a673-4a70-aa9d-ec34c1db17be-combined-ca-bundle\") 
pod \"keystone-bootstrap-b5c9h\" (UID: \"676a9025-a673-4a70-aa9d-ec34c1db17be\") " pod="openstack/keystone-bootstrap-b5c9h" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.408856 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9fhh\" (UniqueName: \"kubernetes.io/projected/676a9025-a673-4a70-aa9d-ec34c1db17be-kube-api-access-n9fhh\") pod \"keystone-bootstrap-b5c9h\" (UID: \"676a9025-a673-4a70-aa9d-ec34c1db17be\") " pod="openstack/keystone-bootstrap-b5c9h" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.412461 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/676a9025-a673-4a70-aa9d-ec34c1db17be-credential-keys\") pod \"keystone-bootstrap-b5c9h\" (UID: \"676a9025-a673-4a70-aa9d-ec34c1db17be\") " pod="openstack/keystone-bootstrap-b5c9h" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.439028 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-6nmwn"] Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.492783 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fca7a19-7db1-4a2e-9f55-d55442cfda87-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-8962p\" (UID: \"1fca7a19-7db1-4a2e-9f55-d55442cfda87\") " pod="openstack/dnsmasq-dns-5959f8865f-8962p" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.492845 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fca7a19-7db1-4a2e-9f55-d55442cfda87-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-8962p\" (UID: \"1fca7a19-7db1-4a2e-9f55-d55442cfda87\") " pod="openstack/dnsmasq-dns-5959f8865f-8962p" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.492892 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v7hl\" (UniqueName: \"kubernetes.io/projected/a0058f32-ae80-4dde-9dce-095c62f45979-kube-api-access-9v7hl\") pod \"heat-db-sync-6nmwn\" (UID: \"a0058f32-ae80-4dde-9dce-095c62f45979\") " pod="openstack/heat-db-sync-6nmwn" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.493066 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fca7a19-7db1-4a2e-9f55-d55442cfda87-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-8962p\" (UID: \"1fca7a19-7db1-4a2e-9f55-d55442cfda87\") " pod="openstack/dnsmasq-dns-5959f8865f-8962p" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.493097 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fca7a19-7db1-4a2e-9f55-d55442cfda87-dns-svc\") pod \"dnsmasq-dns-5959f8865f-8962p\" (UID: \"1fca7a19-7db1-4a2e-9f55-d55442cfda87\") " pod="openstack/dnsmasq-dns-5959f8865f-8962p" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.493114 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0058f32-ae80-4dde-9dce-095c62f45979-combined-ca-bundle\") pod \"heat-db-sync-6nmwn\" (UID: \"a0058f32-ae80-4dde-9dce-095c62f45979\") " pod="openstack/heat-db-sync-6nmwn" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.493186 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjk8f\" 
(UniqueName: \"kubernetes.io/projected/1fca7a19-7db1-4a2e-9f55-d55442cfda87-kube-api-access-kjk8f\") pod \"dnsmasq-dns-5959f8865f-8962p\" (UID: \"1fca7a19-7db1-4a2e-9f55-d55442cfda87\") " pod="openstack/dnsmasq-dns-5959f8865f-8962p" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.493224 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fca7a19-7db1-4a2e-9f55-d55442cfda87-config\") pod \"dnsmasq-dns-5959f8865f-8962p\" (UID: \"1fca7a19-7db1-4a2e-9f55-d55442cfda87\") " pod="openstack/dnsmasq-dns-5959f8865f-8962p" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.493256 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0058f32-ae80-4dde-9dce-095c62f45979-config-data\") pod \"heat-db-sync-6nmwn\" (UID: \"a0058f32-ae80-4dde-9dce-095c62f45979\") " pod="openstack/heat-db-sync-6nmwn" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.494020 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fca7a19-7db1-4a2e-9f55-d55442cfda87-dns-svc\") pod \"dnsmasq-dns-5959f8865f-8962p\" (UID: \"1fca7a19-7db1-4a2e-9f55-d55442cfda87\") " pod="openstack/dnsmasq-dns-5959f8865f-8962p" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.494141 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fca7a19-7db1-4a2e-9f55-d55442cfda87-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-8962p\" (UID: \"1fca7a19-7db1-4a2e-9f55-d55442cfda87\") " pod="openstack/dnsmasq-dns-5959f8865f-8962p" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.494811 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fca7a19-7db1-4a2e-9f55-d55442cfda87-config\") pod \"dnsmasq-dns-5959f8865f-8962p\" (UID: \"1fca7a19-7db1-4a2e-9f55-d55442cfda87\") " pod="openstack/dnsmasq-dns-5959f8865f-8962p" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.494937 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fca7a19-7db1-4a2e-9f55-d55442cfda87-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-8962p\" (UID: \"1fca7a19-7db1-4a2e-9f55-d55442cfda87\") " pod="openstack/dnsmasq-dns-5959f8865f-8962p" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.495298 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fca7a19-7db1-4a2e-9f55-d55442cfda87-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-8962p\" (UID: \"1fca7a19-7db1-4a2e-9f55-d55442cfda87\") " pod="openstack/dnsmasq-dns-5959f8865f-8962p" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.578911 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-b5c9h" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.603375 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v7hl\" (UniqueName: \"kubernetes.io/projected/a0058f32-ae80-4dde-9dce-095c62f45979-kube-api-access-9v7hl\") pod \"heat-db-sync-6nmwn\" (UID: \"a0058f32-ae80-4dde-9dce-095c62f45979\") " pod="openstack/heat-db-sync-6nmwn" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.603508 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0058f32-ae80-4dde-9dce-095c62f45979-combined-ca-bundle\") pod \"heat-db-sync-6nmwn\" (UID: \"a0058f32-ae80-4dde-9dce-095c62f45979\") " pod="openstack/heat-db-sync-6nmwn" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.603600 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0058f32-ae80-4dde-9dce-095c62f45979-config-data\") pod \"heat-db-sync-6nmwn\" (UID: \"a0058f32-ae80-4dde-9dce-095c62f45979\") " pod="openstack/heat-db-sync-6nmwn" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.624452 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0058f32-ae80-4dde-9dce-095c62f45979-config-data\") pod \"heat-db-sync-6nmwn\" (UID: \"a0058f32-ae80-4dde-9dce-095c62f45979\") " pod="openstack/heat-db-sync-6nmwn" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.625252 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjk8f\" (UniqueName: \"kubernetes.io/projected/1fca7a19-7db1-4a2e-9f55-d55442cfda87-kube-api-access-kjk8f\") pod \"dnsmasq-dns-5959f8865f-8962p\" (UID: \"1fca7a19-7db1-4a2e-9f55-d55442cfda87\") " pod="openstack/dnsmasq-dns-5959f8865f-8962p" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.642269 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0058f32-ae80-4dde-9dce-095c62f45979-combined-ca-bundle\") pod \"heat-db-sync-6nmwn\" (UID: \"a0058f32-ae80-4dde-9dce-095c62f45979\") " pod="openstack/heat-db-sync-6nmwn" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.651786 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-qglhp"] Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.653282 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-qglhp" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.673804 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.674001 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-wvjgr" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.674639 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.690107 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v7hl\" (UniqueName: \"kubernetes.io/projected/a0058f32-ae80-4dde-9dce-095c62f45979-kube-api-access-9v7hl\") pod \"heat-db-sync-6nmwn\" (UID: \"a0058f32-ae80-4dde-9dce-095c62f45979\") " pod="openstack/heat-db-sync-6nmwn" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.705383 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qglhp"] Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.706897 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/43da0665-7e6a-4176-ae84-71128a89a243-config\") pod \"neutron-db-sync-qglhp\" (UID: \"43da0665-7e6a-4176-ae84-71128a89a243\") " pod="openstack/neutron-db-sync-qglhp" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.706980 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43da0665-7e6a-4176-ae84-71128a89a243-combined-ca-bundle\") pod \"neutron-db-sync-qglhp\" (UID: \"43da0665-7e6a-4176-ae84-71128a89a243\") " pod="openstack/neutron-db-sync-qglhp" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.707086 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkvgz\" (UniqueName: \"kubernetes.io/projected/43da0665-7e6a-4176-ae84-71128a89a243-kube-api-access-vkvgz\") pod \"neutron-db-sync-qglhp\" (UID: \"43da0665-7e6a-4176-ae84-71128a89a243\") " pod="openstack/neutron-db-sync-qglhp" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.748871 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-j5gfz"] Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.757302 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-j5gfz" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.760344 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.760400 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.774355 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-ldtkt" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.795593 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-j5gfz"] Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.796205 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-8962p" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.809014 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04dae116-ceca-4588-9cba-1266bfa92caf-etc-machine-id\") pod \"cinder-db-sync-j5gfz\" (UID: \"04dae116-ceca-4588-9cba-1266bfa92caf\") " pod="openstack/cinder-db-sync-j5gfz" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.809083 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04dae116-ceca-4588-9cba-1266bfa92caf-combined-ca-bundle\") pod \"cinder-db-sync-j5gfz\" (UID: \"04dae116-ceca-4588-9cba-1266bfa92caf\") " pod="openstack/cinder-db-sync-j5gfz" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.809113 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rkdq\" (UniqueName: \"kubernetes.io/projected/04dae116-ceca-4588-9cba-1266bfa92caf-kube-api-access-2rkdq\") pod \"cinder-db-sync-j5gfz\" (UID: \"04dae116-ceca-4588-9cba-1266bfa92caf\") " pod="openstack/cinder-db-sync-j5gfz" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.809171 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04dae116-ceca-4588-9cba-1266bfa92caf-scripts\") pod \"cinder-db-sync-j5gfz\" (UID: \"04dae116-ceca-4588-9cba-1266bfa92caf\") " pod="openstack/cinder-db-sync-j5gfz" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.809260 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04dae116-ceca-4588-9cba-1266bfa92caf-config-data\") pod \"cinder-db-sync-j5gfz\" (UID: \"04dae116-ceca-4588-9cba-1266bfa92caf\") " pod="openstack/cinder-db-sync-j5gfz" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.809316 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/43da0665-7e6a-4176-ae84-71128a89a243-config\") pod \"neutron-db-sync-qglhp\" (UID: \"43da0665-7e6a-4176-ae84-71128a89a243\") " pod="openstack/neutron-db-sync-qglhp" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.809383 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43da0665-7e6a-4176-ae84-71128a89a243-combined-ca-bundle\") pod \"neutron-db-sync-qglhp\" (UID: \"43da0665-7e6a-4176-ae84-71128a89a243\") " pod="openstack/neutron-db-sync-qglhp" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.809462 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/04dae116-ceca-4588-9cba-1266bfa92caf-db-sync-config-data\") pod \"cinder-db-sync-j5gfz\" (UID: \"04dae116-ceca-4588-9cba-1266bfa92caf\") " pod="openstack/cinder-db-sync-j5gfz" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.809495 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkvgz\" (UniqueName: \"kubernetes.io/projected/43da0665-7e6a-4176-ae84-71128a89a243-kube-api-access-vkvgz\") pod \"neutron-db-sync-qglhp\" (UID: \"43da0665-7e6a-4176-ae84-71128a89a243\") " pod="openstack/neutron-db-sync-qglhp" Jan 29 
17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.816393 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-6nmwn" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.821197 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43da0665-7e6a-4176-ae84-71128a89a243-combined-ca-bundle\") pod \"neutron-db-sync-qglhp\" (UID: \"43da0665-7e6a-4176-ae84-71128a89a243\") " pod="openstack/neutron-db-sync-qglhp" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.822494 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/43da0665-7e6a-4176-ae84-71128a89a243-config\") pod \"neutron-db-sync-qglhp\" (UID: \"43da0665-7e6a-4176-ae84-71128a89a243\") " pod="openstack/neutron-db-sync-qglhp" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.837127 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-8m2mm"] Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.838634 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8m2mm" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.842867 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkvgz\" (UniqueName: \"kubernetes.io/projected/43da0665-7e6a-4176-ae84-71128a89a243-kube-api-access-vkvgz\") pod \"neutron-db-sync-qglhp\" (UID: \"43da0665-7e6a-4176-ae84-71128a89a243\") " pod="openstack/neutron-db-sync-qglhp" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.844787 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.845110 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.845239 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-mrvvt" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.869370 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-q2dxw"] Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.887311 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-q2dxw" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.892942 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8m2mm"] Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.928646 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5k8bj" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.937517 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.943034 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ckms\" (UniqueName: \"kubernetes.io/projected/8923ac96-087a-425b-a8b4-c09aa4be3d78-kube-api-access-8ckms\") pod \"placement-db-sync-8m2mm\" (UID: \"8923ac96-087a-425b-a8b4-c09aa4be3d78\") " pod="openstack/placement-db-sync-8m2mm" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.943137 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/04dae116-ceca-4588-9cba-1266bfa92caf-db-sync-config-data\") pod \"cinder-db-sync-j5gfz\" (UID: \"04dae116-ceca-4588-9cba-1266bfa92caf\") " pod="openstack/cinder-db-sync-j5gfz" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.943176 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86p7n\" (UniqueName: \"kubernetes.io/projected/ffb099fb-7bdb-4969-b3cb-6fc4ef498afd-kube-api-access-86p7n\") pod \"barbican-db-sync-q2dxw\" (UID: \"ffb099fb-7bdb-4969-b3cb-6fc4ef498afd\") " pod="openstack/barbican-db-sync-q2dxw" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.943782 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ffb099fb-7bdb-4969-b3cb-6fc4ef498afd-db-sync-config-data\") pod \"barbican-db-sync-q2dxw\" (UID: \"ffb099fb-7bdb-4969-b3cb-6fc4ef498afd\") " pod="openstack/barbican-db-sync-q2dxw" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.944008 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04dae116-ceca-4588-9cba-1266bfa92caf-etc-machine-id\") pod \"cinder-db-sync-j5gfz\" (UID: \"04dae116-ceca-4588-9cba-1266bfa92caf\") " pod="openstack/cinder-db-sync-j5gfz" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.944048 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffb099fb-7bdb-4969-b3cb-6fc4ef498afd-combined-ca-bundle\") pod \"barbican-db-sync-q2dxw\" (UID: \"ffb099fb-7bdb-4969-b3cb-6fc4ef498afd\") " pod="openstack/barbican-db-sync-q2dxw" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.944113 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04dae116-ceca-4588-9cba-1266bfa92caf-combined-ca-bundle\") pod \"cinder-db-sync-j5gfz\" (UID: \"04dae116-ceca-4588-9cba-1266bfa92caf\") " pod="openstack/cinder-db-sync-j5gfz" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.944423 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rkdq\" (UniqueName: 
\"kubernetes.io/projected/04dae116-ceca-4588-9cba-1266bfa92caf-kube-api-access-2rkdq\") pod \"cinder-db-sync-j5gfz\" (UID: \"04dae116-ceca-4588-9cba-1266bfa92caf\") " pod="openstack/cinder-db-sync-j5gfz" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.944459 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8923ac96-087a-425b-a8b4-c09aa4be3d78-combined-ca-bundle\") pod \"placement-db-sync-8m2mm\" (UID: \"8923ac96-087a-425b-a8b4-c09aa4be3d78\") " pod="openstack/placement-db-sync-8m2mm" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.944528 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8923ac96-087a-425b-a8b4-c09aa4be3d78-scripts\") pod \"placement-db-sync-8m2mm\" (UID: \"8923ac96-087a-425b-a8b4-c09aa4be3d78\") " pod="openstack/placement-db-sync-8m2mm" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.944617 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8923ac96-087a-425b-a8b4-c09aa4be3d78-config-data\") pod \"placement-db-sync-8m2mm\" (UID: \"8923ac96-087a-425b-a8b4-c09aa4be3d78\") " pod="openstack/placement-db-sync-8m2mm" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.944886 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04dae116-ceca-4588-9cba-1266bfa92caf-scripts\") pod \"cinder-db-sync-j5gfz\" (UID: \"04dae116-ceca-4588-9cba-1266bfa92caf\") " pod="openstack/cinder-db-sync-j5gfz" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.944958 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04dae116-ceca-4588-9cba-1266bfa92caf-config-data\") pod \"cinder-db-sync-j5gfz\" (UID: \"04dae116-ceca-4588-9cba-1266bfa92caf\") " pod="openstack/cinder-db-sync-j5gfz" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.945191 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8923ac96-087a-425b-a8b4-c09aa4be3d78-logs\") pod \"placement-db-sync-8m2mm\" (UID: \"8923ac96-087a-425b-a8b4-c09aa4be3d78\") " pod="openstack/placement-db-sync-8m2mm" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.946750 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04dae116-ceca-4588-9cba-1266bfa92caf-etc-machine-id\") pod \"cinder-db-sync-j5gfz\" (UID: \"04dae116-ceca-4588-9cba-1266bfa92caf\") " pod="openstack/cinder-db-sync-j5gfz" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.952203 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04dae116-ceca-4588-9cba-1266bfa92caf-scripts\") pod \"cinder-db-sync-j5gfz\" (UID: \"04dae116-ceca-4588-9cba-1266bfa92caf\") " pod="openstack/cinder-db-sync-j5gfz" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.954552 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04dae116-ceca-4588-9cba-1266bfa92caf-config-data\") pod \"cinder-db-sync-j5gfz\" (UID: \"04dae116-ceca-4588-9cba-1266bfa92caf\") " pod="openstack/cinder-db-sync-j5gfz" Jan 29 
17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.962780 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/04dae116-ceca-4588-9cba-1266bfa92caf-db-sync-config-data\") pod \"cinder-db-sync-j5gfz\" (UID: \"04dae116-ceca-4588-9cba-1266bfa92caf\") " pod="openstack/cinder-db-sync-j5gfz" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.978178 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04dae116-ceca-4588-9cba-1266bfa92caf-combined-ca-bundle\") pod \"cinder-db-sync-j5gfz\" (UID: \"04dae116-ceca-4588-9cba-1266bfa92caf\") " pod="openstack/cinder-db-sync-j5gfz" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.981046 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rkdq\" (UniqueName: \"kubernetes.io/projected/04dae116-ceca-4588-9cba-1266bfa92caf-kube-api-access-2rkdq\") pod \"cinder-db-sync-j5gfz\" (UID: \"04dae116-ceca-4588-9cba-1266bfa92caf\") " pod="openstack/cinder-db-sync-j5gfz" Jan 29 17:06:04 crc kubenswrapper[4886]: I0129 17:06:04.988707 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-q2dxw"] Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.002025 4886 generic.go:334] "Generic (PLEG): container finished" podID="0ebe69f9-b35b-47a6-976d-bca3b8b8af25" containerID="9d62c141d557ad4f511cc99617ca7914a9fcfe251f2f34d5a37428a245460d8c" exitCode=0 Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.002071 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" event={"ID":"0ebe69f9-b35b-47a6-976d-bca3b8b8af25","Type":"ContainerDied","Data":"9d62c141d557ad4f511cc99617ca7914a9fcfe251f2f34d5a37428a245460d8c"} Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.062780 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8923ac96-087a-425b-a8b4-c09aa4be3d78-scripts\") pod \"placement-db-sync-8m2mm\" (UID: \"8923ac96-087a-425b-a8b4-c09aa4be3d78\") " pod="openstack/placement-db-sync-8m2mm" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.062854 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8923ac96-087a-425b-a8b4-c09aa4be3d78-config-data\") pod \"placement-db-sync-8m2mm\" (UID: \"8923ac96-087a-425b-a8b4-c09aa4be3d78\") " pod="openstack/placement-db-sync-8m2mm" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.062996 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8923ac96-087a-425b-a8b4-c09aa4be3d78-logs\") pod \"placement-db-sync-8m2mm\" (UID: \"8923ac96-087a-425b-a8b4-c09aa4be3d78\") " pod="openstack/placement-db-sync-8m2mm" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.063102 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ckms\" (UniqueName: \"kubernetes.io/projected/8923ac96-087a-425b-a8b4-c09aa4be3d78-kube-api-access-8ckms\") pod \"placement-db-sync-8m2mm\" (UID: \"8923ac96-087a-425b-a8b4-c09aa4be3d78\") " pod="openstack/placement-db-sync-8m2mm" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.063155 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86p7n\" (UniqueName: 
\"kubernetes.io/projected/ffb099fb-7bdb-4969-b3cb-6fc4ef498afd-kube-api-access-86p7n\") pod \"barbican-db-sync-q2dxw\" (UID: \"ffb099fb-7bdb-4969-b3cb-6fc4ef498afd\") " pod="openstack/barbican-db-sync-q2dxw" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.063261 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ffb099fb-7bdb-4969-b3cb-6fc4ef498afd-db-sync-config-data\") pod \"barbican-db-sync-q2dxw\" (UID: \"ffb099fb-7bdb-4969-b3cb-6fc4ef498afd\") " pod="openstack/barbican-db-sync-q2dxw" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.063339 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffb099fb-7bdb-4969-b3cb-6fc4ef498afd-combined-ca-bundle\") pod \"barbican-db-sync-q2dxw\" (UID: \"ffb099fb-7bdb-4969-b3cb-6fc4ef498afd\") " pod="openstack/barbican-db-sync-q2dxw" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.063394 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8923ac96-087a-425b-a8b4-c09aa4be3d78-combined-ca-bundle\") pod \"placement-db-sync-8m2mm\" (UID: \"8923ac96-087a-425b-a8b4-c09aa4be3d78\") " pod="openstack/placement-db-sync-8m2mm" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.064472 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8923ac96-087a-425b-a8b4-c09aa4be3d78-logs\") pod \"placement-db-sync-8m2mm\" (UID: \"8923ac96-087a-425b-a8b4-c09aa4be3d78\") " pod="openstack/placement-db-sync-8m2mm" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.091516 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8923ac96-087a-425b-a8b4-c09aa4be3d78-scripts\") pod \"placement-db-sync-8m2mm\" (UID: \"8923ac96-087a-425b-a8b4-c09aa4be3d78\") " pod="openstack/placement-db-sync-8m2mm" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.108983 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ckms\" (UniqueName: \"kubernetes.io/projected/8923ac96-087a-425b-a8b4-c09aa4be3d78-kube-api-access-8ckms\") pod \"placement-db-sync-8m2mm\" (UID: \"8923ac96-087a-425b-a8b4-c09aa4be3d78\") " pod="openstack/placement-db-sync-8m2mm" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.116487 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffb099fb-7bdb-4969-b3cb-6fc4ef498afd-combined-ca-bundle\") pod \"barbican-db-sync-q2dxw\" (UID: \"ffb099fb-7bdb-4969-b3cb-6fc4ef498afd\") " pod="openstack/barbican-db-sync-q2dxw" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.116824 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ffb099fb-7bdb-4969-b3cb-6fc4ef498afd-db-sync-config-data\") pod \"barbican-db-sync-q2dxw\" (UID: \"ffb099fb-7bdb-4969-b3cb-6fc4ef498afd\") " pod="openstack/barbican-db-sync-q2dxw" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.117545 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8923ac96-087a-425b-a8b4-c09aa4be3d78-config-data\") pod \"placement-db-sync-8m2mm\" (UID: \"8923ac96-087a-425b-a8b4-c09aa4be3d78\") " 
pod="openstack/placement-db-sync-8m2mm" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.122607 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8923ac96-087a-425b-a8b4-c09aa4be3d78-combined-ca-bundle\") pod \"placement-db-sync-8m2mm\" (UID: \"8923ac96-087a-425b-a8b4-c09aa4be3d78\") " pod="openstack/placement-db-sync-8m2mm" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.125824 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qglhp" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.131197 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-8962p"] Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.141157 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86p7n\" (UniqueName: \"kubernetes.io/projected/ffb099fb-7bdb-4969-b3cb-6fc4ef498afd-kube-api-access-86p7n\") pod \"barbican-db-sync-q2dxw\" (UID: \"ffb099fb-7bdb-4969-b3cb-6fc4ef498afd\") " pod="openstack/barbican-db-sync-q2dxw" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.147288 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-j5gfz" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.153149 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-5smww"] Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.157499 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.213932 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-5smww"] Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.225758 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.237434 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.238115 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8m2mm" Jan 29 17:06:05 crc kubenswrapper[4886]: E0129 17:06:05.249074 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ebe69f9-b35b-47a6-976d-bca3b8b8af25" containerName="dnsmasq-dns" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.249113 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ebe69f9-b35b-47a6-976d-bca3b8b8af25" containerName="dnsmasq-dns" Jan 29 17:06:05 crc kubenswrapper[4886]: E0129 17:06:05.249162 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ebe69f9-b35b-47a6-976d-bca3b8b8af25" containerName="init" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.249169 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ebe69f9-b35b-47a6-976d-bca3b8b8af25" containerName="init" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.249474 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ebe69f9-b35b-47a6-976d-bca3b8b8af25" containerName="dnsmasq-dns" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.251396 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.252075 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.258182 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.258373 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.272402 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-5smww\" (UID: \"2c74b25a-0daf-4c7e-a023-a7082d8d73cf\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.273850 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-5smww\" (UID: \"2c74b25a-0daf-4c7e-a023-a7082d8d73cf\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.274106 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88mjr\" (UniqueName: \"kubernetes.io/projected/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-kube-api-access-88mjr\") pod \"dnsmasq-dns-58dd9ff6bc-5smww\" (UID: \"2c74b25a-0daf-4c7e-a023-a7082d8d73cf\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.274489 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-5smww\" (UID: \"2c74b25a-0daf-4c7e-a023-a7082d8d73cf\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.274735 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-5smww\" (UID: \"2c74b25a-0daf-4c7e-a023-a7082d8d73cf\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.275221 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-config\") pod \"dnsmasq-dns-58dd9ff6bc-5smww\" (UID: \"2c74b25a-0daf-4c7e-a023-a7082d8d73cf\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.305160 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-q2dxw" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.380887 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-config\") pod \"0ebe69f9-b35b-47a6-976d-bca3b8b8af25\" (UID: \"0ebe69f9-b35b-47a6-976d-bca3b8b8af25\") " Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.387951 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-dns-swift-storage-0\") pod \"0ebe69f9-b35b-47a6-976d-bca3b8b8af25\" (UID: \"0ebe69f9-b35b-47a6-976d-bca3b8b8af25\") " Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.388054 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-dns-svc\") pod \"0ebe69f9-b35b-47a6-976d-bca3b8b8af25\" (UID: \"0ebe69f9-b35b-47a6-976d-bca3b8b8af25\") " Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.389182 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-ovsdbserver-sb\") pod \"0ebe69f9-b35b-47a6-976d-bca3b8b8af25\" (UID: \"0ebe69f9-b35b-47a6-976d-bca3b8b8af25\") " Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.389214 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x27r6\" (UniqueName: \"kubernetes.io/projected/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-kube-api-access-x27r6\") pod \"0ebe69f9-b35b-47a6-976d-bca3b8b8af25\" (UID: \"0ebe69f9-b35b-47a6-976d-bca3b8b8af25\") " Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.389340 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-ovsdbserver-nb\") pod \"0ebe69f9-b35b-47a6-976d-bca3b8b8af25\" (UID: \"0ebe69f9-b35b-47a6-976d-bca3b8b8af25\") " Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.389837 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4459b\" (UniqueName: \"kubernetes.io/projected/87986c31-37d7-4624-87a2-b5678e01d865-kube-api-access-4459b\") pod \"ceilometer-0\" (UID: \"87986c31-37d7-4624-87a2-b5678e01d865\") " pod="openstack/ceilometer-0" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.389891 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87986c31-37d7-4624-87a2-b5678e01d865-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"87986c31-37d7-4624-87a2-b5678e01d865\") " pod="openstack/ceilometer-0" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.389997 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-5smww\" (UID: \"2c74b25a-0daf-4c7e-a023-a7082d8d73cf\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.390051 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/87986c31-37d7-4624-87a2-b5678e01d865-run-httpd\") pod \"ceilometer-0\" (UID: \"87986c31-37d7-4624-87a2-b5678e01d865\") " pod="openstack/ceilometer-0" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.390123 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-5smww\" (UID: \"2c74b25a-0daf-4c7e-a023-a7082d8d73cf\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.390174 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87986c31-37d7-4624-87a2-b5678e01d865-config-data\") pod \"ceilometer-0\" (UID: \"87986c31-37d7-4624-87a2-b5678e01d865\") " pod="openstack/ceilometer-0" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.390376 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-config\") pod \"dnsmasq-dns-58dd9ff6bc-5smww\" (UID: \"2c74b25a-0daf-4c7e-a023-a7082d8d73cf\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.390403 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87986c31-37d7-4624-87a2-b5678e01d865-log-httpd\") pod \"ceilometer-0\" (UID: \"87986c31-37d7-4624-87a2-b5678e01d865\") " pod="openstack/ceilometer-0" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.390570 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87986c31-37d7-4624-87a2-b5678e01d865-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"87986c31-37d7-4624-87a2-b5678e01d865\") " pod="openstack/ceilometer-0" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.390782 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87986c31-37d7-4624-87a2-b5678e01d865-scripts\") pod \"ceilometer-0\" (UID: \"87986c31-37d7-4624-87a2-b5678e01d865\") " pod="openstack/ceilometer-0" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.391019 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-5smww\" (UID: \"2c74b25a-0daf-4c7e-a023-a7082d8d73cf\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.391099 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-5smww\" (UID: \"2c74b25a-0daf-4c7e-a023-a7082d8d73cf\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.391140 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88mjr\" (UniqueName: \"kubernetes.io/projected/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-kube-api-access-88mjr\") pod \"dnsmasq-dns-58dd9ff6bc-5smww\" (UID: \"2c74b25a-0daf-4c7e-a023-a7082d8d73cf\") " 
pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.391692 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-5smww\" (UID: \"2c74b25a-0daf-4c7e-a023-a7082d8d73cf\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.396050 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-config\") pod \"dnsmasq-dns-58dd9ff6bc-5smww\" (UID: \"2c74b25a-0daf-4c7e-a023-a7082d8d73cf\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.396830 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-5smww\" (UID: \"2c74b25a-0daf-4c7e-a023-a7082d8d73cf\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.398092 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-5smww\" (UID: \"2c74b25a-0daf-4c7e-a023-a7082d8d73cf\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.401891 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-5smww\" (UID: \"2c74b25a-0daf-4c7e-a023-a7082d8d73cf\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.421535 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-kube-api-access-x27r6" (OuterVolumeSpecName: "kube-api-access-x27r6") pod "0ebe69f9-b35b-47a6-976d-bca3b8b8af25" (UID: "0ebe69f9-b35b-47a6-976d-bca3b8b8af25"). InnerVolumeSpecName "kube-api-access-x27r6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.441403 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88mjr\" (UniqueName: \"kubernetes.io/projected/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-kube-api-access-88mjr\") pod \"dnsmasq-dns-58dd9ff6bc-5smww\" (UID: \"2c74b25a-0daf-4c7e-a023-a7082d8d73cf\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.486359 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.498304 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87986c31-37d7-4624-87a2-b5678e01d865-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"87986c31-37d7-4624-87a2-b5678e01d865\") " pod="openstack/ceilometer-0" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.498384 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87986c31-37d7-4624-87a2-b5678e01d865-scripts\") pod \"ceilometer-0\" (UID: \"87986c31-37d7-4624-87a2-b5678e01d865\") " pod="openstack/ceilometer-0" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.498604 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4459b\" (UniqueName: \"kubernetes.io/projected/87986c31-37d7-4624-87a2-b5678e01d865-kube-api-access-4459b\") pod \"ceilometer-0\" (UID: \"87986c31-37d7-4624-87a2-b5678e01d865\") " pod="openstack/ceilometer-0" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.498637 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87986c31-37d7-4624-87a2-b5678e01d865-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"87986c31-37d7-4624-87a2-b5678e01d865\") " pod="openstack/ceilometer-0" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.498717 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87986c31-37d7-4624-87a2-b5678e01d865-run-httpd\") pod \"ceilometer-0\" (UID: \"87986c31-37d7-4624-87a2-b5678e01d865\") " pod="openstack/ceilometer-0" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.498790 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87986c31-37d7-4624-87a2-b5678e01d865-config-data\") pod \"ceilometer-0\" (UID: \"87986c31-37d7-4624-87a2-b5678e01d865\") " pod="openstack/ceilometer-0" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.498869 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87986c31-37d7-4624-87a2-b5678e01d865-log-httpd\") pod \"ceilometer-0\" (UID: \"87986c31-37d7-4624-87a2-b5678e01d865\") " pod="openstack/ceilometer-0" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.503570 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87986c31-37d7-4624-87a2-b5678e01d865-run-httpd\") pod \"ceilometer-0\" (UID: \"87986c31-37d7-4624-87a2-b5678e01d865\") " pod="openstack/ceilometer-0" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.504466 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x27r6\" (UniqueName: \"kubernetes.io/projected/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-kube-api-access-x27r6\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.505066 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87986c31-37d7-4624-87a2-b5678e01d865-log-httpd\") pod \"ceilometer-0\" (UID: \"87986c31-37d7-4624-87a2-b5678e01d865\") " pod="openstack/ceilometer-0" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 
17:06:05.514630 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87986c31-37d7-4624-87a2-b5678e01d865-config-data\") pod \"ceilometer-0\" (UID: \"87986c31-37d7-4624-87a2-b5678e01d865\") " pod="openstack/ceilometer-0" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.541006 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87986c31-37d7-4624-87a2-b5678e01d865-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"87986c31-37d7-4624-87a2-b5678e01d865\") " pod="openstack/ceilometer-0" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.541672 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87986c31-37d7-4624-87a2-b5678e01d865-scripts\") pod \"ceilometer-0\" (UID: \"87986c31-37d7-4624-87a2-b5678e01d865\") " pod="openstack/ceilometer-0" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.542801 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87986c31-37d7-4624-87a2-b5678e01d865-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"87986c31-37d7-4624-87a2-b5678e01d865\") " pod="openstack/ceilometer-0" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.553820 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-b5c9h"] Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.562234 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-config" (OuterVolumeSpecName: "config") pod "0ebe69f9-b35b-47a6-976d-bca3b8b8af25" (UID: "0ebe69f9-b35b-47a6-976d-bca3b8b8af25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.566303 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4459b\" (UniqueName: \"kubernetes.io/projected/87986c31-37d7-4624-87a2-b5678e01d865-kube-api-access-4459b\") pod \"ceilometer-0\" (UID: \"87986c31-37d7-4624-87a2-b5678e01d865\") " pod="openstack/ceilometer-0" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.597798 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0ebe69f9-b35b-47a6-976d-bca3b8b8af25" (UID: "0ebe69f9-b35b-47a6-976d-bca3b8b8af25"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.598830 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0ebe69f9-b35b-47a6-976d-bca3b8b8af25" (UID: "0ebe69f9-b35b-47a6-976d-bca3b8b8af25"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.606448 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0ebe69f9-b35b-47a6-976d-bca3b8b8af25" (UID: "0ebe69f9-b35b-47a6-976d-bca3b8b8af25"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.614407 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.615094 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.615106 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.615117 4886 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.620804 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0ebe69f9-b35b-47a6-976d-bca3b8b8af25" (UID: "0ebe69f9-b35b-47a6-976d-bca3b8b8af25"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.663121 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:06:05 crc kubenswrapper[4886]: I0129 17:06:05.727427 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ebe69f9-b35b-47a6-976d-bca3b8b8af25-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:06 crc kubenswrapper[4886]: I0129 17:06:06.029369 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" event={"ID":"0ebe69f9-b35b-47a6-976d-bca3b8b8af25","Type":"ContainerDied","Data":"8a5d3dfd30af2f5ac812c053e6d3808dbffd8286368baff784598dc2a9536f00"} Jan 29 17:06:06 crc kubenswrapper[4886]: I0129 17:06:06.029772 4886 scope.go:117] "RemoveContainer" containerID="9d62c141d557ad4f511cc99617ca7914a9fcfe251f2f34d5a37428a245460d8c" Jan 29 17:06:06 crc kubenswrapper[4886]: I0129 17:06:06.029695 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-6r9cj" Jan 29 17:06:06 crc kubenswrapper[4886]: I0129 17:06:06.052523 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b5c9h" event={"ID":"676a9025-a673-4a70-aa9d-ec34c1db17be","Type":"ContainerStarted","Data":"24f822770ac33b496012b10bfe803c315a5cfcfd68498769b1825800fd0da253"} Jan 29 17:06:06 crc kubenswrapper[4886]: W0129 17:06:06.054868 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fca7a19_7db1_4a2e_9f55_d55442cfda87.slice/crio-f0564cff8924c07ae14fa9bcf81d675ce573496e02b6b78a9d4ba5771735575e WatchSource:0}: Error finding container f0564cff8924c07ae14fa9bcf81d675ce573496e02b6b78a9d4ba5771735575e: Status 404 returned error can't find the container with id f0564cff8924c07ae14fa9bcf81d675ce573496e02b6b78a9d4ba5771735575e Jan 29 17:06:06 crc kubenswrapper[4886]: I0129 17:06:06.059168 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-8962p"] Jan 29 17:06:06 crc kubenswrapper[4886]: W0129 17:06:06.091037 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0058f32_ae80_4dde_9dce_095c62f45979.slice/crio-d9df74376035a2b4e196d856e8d76469a75a91514ac671f314bd4926926ee2e3 WatchSource:0}: Error finding container d9df74376035a2b4e196d856e8d76469a75a91514ac671f314bd4926926ee2e3: Status 404 returned error can't find the container with id d9df74376035a2b4e196d856e8d76469a75a91514ac671f314bd4926926ee2e3 Jan 29 17:06:06 crc kubenswrapper[4886]: I0129 17:06:06.109837 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-6nmwn"] Jan 29 17:06:06 crc kubenswrapper[4886]: I0129 17:06:06.151660 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-6r9cj"] Jan 29 17:06:06 crc kubenswrapper[4886]: I0129 17:06:06.179225 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-6r9cj"] Jan 29 17:06:06 crc kubenswrapper[4886]: I0129 17:06:06.181903 4886 scope.go:117] "RemoveContainer" containerID="d79e54176b743ae62954d38e473d94b6d45be717a470bbf226985d6f28fe5bd4" Jan 29 17:06:06 crc kubenswrapper[4886]: W0129 17:06:06.283291 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43da0665_7e6a_4176_ae84_71128a89a243.slice/crio-466198a6dbe8073f38dde3862e5bfda50e204a4fc5dd98f6c616c1e63cc8d1a0 WatchSource:0}: Error finding container 466198a6dbe8073f38dde3862e5bfda50e204a4fc5dd98f6c616c1e63cc8d1a0: Status 404 returned error can't find the container with id 466198a6dbe8073f38dde3862e5bfda50e204a4fc5dd98f6c616c1e63cc8d1a0 Jan 29 17:06:06 crc kubenswrapper[4886]: I0129 17:06:06.287575 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qglhp"] Jan 29 17:06:06 crc kubenswrapper[4886]: I0129 17:06:06.332485 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-j5gfz"] Jan 29 17:06:06 crc kubenswrapper[4886]: I0129 17:06:06.553013 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8m2mm"] Jan 29 17:06:06 crc kubenswrapper[4886]: I0129 17:06:06.638025 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ebe69f9-b35b-47a6-976d-bca3b8b8af25" path="/var/lib/kubelet/pods/0ebe69f9-b35b-47a6-976d-bca3b8b8af25/volumes" Jan 
29 17:06:06 crc kubenswrapper[4886]: I0129 17:06:06.943481 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.040293 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-q2dxw"] Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.052952 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-5smww"] Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.074711 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.104817 4886 generic.go:334] "Generic (PLEG): container finished" podID="1fca7a19-7db1-4a2e-9f55-d55442cfda87" containerID="850b39de005465a0ca176b82210b0557b234cba9ae1cd5ffefbe61ffc7abab5e" exitCode=0 Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.104997 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-8962p" event={"ID":"1fca7a19-7db1-4a2e-9f55-d55442cfda87","Type":"ContainerDied","Data":"850b39de005465a0ca176b82210b0557b234cba9ae1cd5ffefbe61ffc7abab5e"} Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.105076 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-8962p" event={"ID":"1fca7a19-7db1-4a2e-9f55-d55442cfda87","Type":"ContainerStarted","Data":"f0564cff8924c07ae14fa9bcf81d675ce573496e02b6b78a9d4ba5771735575e"} Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.108853 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" event={"ID":"2c74b25a-0daf-4c7e-a023-a7082d8d73cf","Type":"ContainerStarted","Data":"02d41ab973396ad0b9067fb7d12dd022b4232ab3e2460c195caa3ce7c6f4e250"} Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.115669 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qglhp" event={"ID":"43da0665-7e6a-4176-ae84-71128a89a243","Type":"ContainerStarted","Data":"c4ce1f7996acaa4140e3f499ede2bc0c80a3f2eb7c1df999e0b4f5903e1d75cf"} Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.115718 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qglhp" event={"ID":"43da0665-7e6a-4176-ae84-71128a89a243","Type":"ContainerStarted","Data":"466198a6dbe8073f38dde3862e5bfda50e204a4fc5dd98f6c616c1e63cc8d1a0"} Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.134512 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-6nmwn" event={"ID":"a0058f32-ae80-4dde-9dce-095c62f45979","Type":"ContainerStarted","Data":"d9df74376035a2b4e196d856e8d76469a75a91514ac671f314bd4926926ee2e3"} Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.159647 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-qglhp" podStartSLOduration=3.159624139 podStartE2EDuration="3.159624139s" podCreationTimestamp="2026-01-29 17:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:06:07.157410897 +0000 UTC m=+2650.066130189" watchObservedRunningTime="2026-01-29 17:06:07.159624139 +0000 UTC m=+2650.068343421" Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.171451 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b5c9h" 
event={"ID":"676a9025-a673-4a70-aa9d-ec34c1db17be","Type":"ContainerStarted","Data":"9b68510df598b451ff2d4faad4a0af1636831487ecf72ad66ce874c635cd8d9e"} Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.188175 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8m2mm" event={"ID":"8923ac96-087a-425b-a8b4-c09aa4be3d78","Type":"ContainerStarted","Data":"7ba3dd51612ec84b7435debfb27c88330b100c1320a10e3e0bea0e482e076cd8"} Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.197108 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-j5gfz" event={"ID":"04dae116-ceca-4588-9cba-1266bfa92caf","Type":"ContainerStarted","Data":"3d72bfc601ef7f8aa44a162e8a49bc717daf618d327e886ac546527a7c3a7e17"} Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.199251 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-q2dxw" event={"ID":"ffb099fb-7bdb-4969-b3cb-6fc4ef498afd","Type":"ContainerStarted","Data":"474a2d0d1c07609e70e6ff2d358c4e7ec5598344e910e4e2e3ec3d713255b48d"} Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.223148 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-b5c9h" podStartSLOduration=3.223125538 podStartE2EDuration="3.223125538s" podCreationTimestamp="2026-01-29 17:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:06:07.191658671 +0000 UTC m=+2650.100377963" watchObservedRunningTime="2026-01-29 17:06:07.223125538 +0000 UTC m=+2650.131844810" Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.709054 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-8962p" Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.721537 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fca7a19-7db1-4a2e-9f55-d55442cfda87-config\") pod \"1fca7a19-7db1-4a2e-9f55-d55442cfda87\" (UID: \"1fca7a19-7db1-4a2e-9f55-d55442cfda87\") " Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.721606 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fca7a19-7db1-4a2e-9f55-d55442cfda87-ovsdbserver-nb\") pod \"1fca7a19-7db1-4a2e-9f55-d55442cfda87\" (UID: \"1fca7a19-7db1-4a2e-9f55-d55442cfda87\") " Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.721632 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fca7a19-7db1-4a2e-9f55-d55442cfda87-dns-svc\") pod \"1fca7a19-7db1-4a2e-9f55-d55442cfda87\" (UID: \"1fca7a19-7db1-4a2e-9f55-d55442cfda87\") " Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.721650 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fca7a19-7db1-4a2e-9f55-d55442cfda87-ovsdbserver-sb\") pod \"1fca7a19-7db1-4a2e-9f55-d55442cfda87\" (UID: \"1fca7a19-7db1-4a2e-9f55-d55442cfda87\") " Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.756751 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fca7a19-7db1-4a2e-9f55-d55442cfda87-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1fca7a19-7db1-4a2e-9f55-d55442cfda87" (UID: 
"1fca7a19-7db1-4a2e-9f55-d55442cfda87"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.759387 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fca7a19-7db1-4a2e-9f55-d55442cfda87-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1fca7a19-7db1-4a2e-9f55-d55442cfda87" (UID: "1fca7a19-7db1-4a2e-9f55-d55442cfda87"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.766026 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fca7a19-7db1-4a2e-9f55-d55442cfda87-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1fca7a19-7db1-4a2e-9f55-d55442cfda87" (UID: "1fca7a19-7db1-4a2e-9f55-d55442cfda87"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.800900 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fca7a19-7db1-4a2e-9f55-d55442cfda87-config" (OuterVolumeSpecName: "config") pod "1fca7a19-7db1-4a2e-9f55-d55442cfda87" (UID: "1fca7a19-7db1-4a2e-9f55-d55442cfda87"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.823577 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fca7a19-7db1-4a2e-9f55-d55442cfda87-dns-swift-storage-0\") pod \"1fca7a19-7db1-4a2e-9f55-d55442cfda87\" (UID: \"1fca7a19-7db1-4a2e-9f55-d55442cfda87\") " Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.823695 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjk8f\" (UniqueName: \"kubernetes.io/projected/1fca7a19-7db1-4a2e-9f55-d55442cfda87-kube-api-access-kjk8f\") pod \"1fca7a19-7db1-4a2e-9f55-d55442cfda87\" (UID: \"1fca7a19-7db1-4a2e-9f55-d55442cfda87\") " Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.824261 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fca7a19-7db1-4a2e-9f55-d55442cfda87-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.824297 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fca7a19-7db1-4a2e-9f55-d55442cfda87-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.824312 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fca7a19-7db1-4a2e-9f55-d55442cfda87-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.824336 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fca7a19-7db1-4a2e-9f55-d55442cfda87-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.827040 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fca7a19-7db1-4a2e-9f55-d55442cfda87-kube-api-access-kjk8f" (OuterVolumeSpecName: "kube-api-access-kjk8f") pod "1fca7a19-7db1-4a2e-9f55-d55442cfda87" (UID: "1fca7a19-7db1-4a2e-9f55-d55442cfda87"). 
InnerVolumeSpecName "kube-api-access-kjk8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.855851 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fca7a19-7db1-4a2e-9f55-d55442cfda87-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1fca7a19-7db1-4a2e-9f55-d55442cfda87" (UID: "1fca7a19-7db1-4a2e-9f55-d55442cfda87"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.928496 4886 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fca7a19-7db1-4a2e-9f55-d55442cfda87-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:07 crc kubenswrapper[4886]: I0129 17:06:07.928541 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjk8f\" (UniqueName: \"kubernetes.io/projected/1fca7a19-7db1-4a2e-9f55-d55442cfda87-kube-api-access-kjk8f\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:08 crc kubenswrapper[4886]: I0129 17:06:08.218571 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-8962p" event={"ID":"1fca7a19-7db1-4a2e-9f55-d55442cfda87","Type":"ContainerDied","Data":"f0564cff8924c07ae14fa9bcf81d675ce573496e02b6b78a9d4ba5771735575e"} Jan 29 17:06:08 crc kubenswrapper[4886]: I0129 17:06:08.218622 4886 scope.go:117] "RemoveContainer" containerID="850b39de005465a0ca176b82210b0557b234cba9ae1cd5ffefbe61ffc7abab5e" Jan 29 17:06:08 crc kubenswrapper[4886]: I0129 17:06:08.219136 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-8962p" Jan 29 17:06:08 crc kubenswrapper[4886]: I0129 17:06:08.228655 4886 generic.go:334] "Generic (PLEG): container finished" podID="2c74b25a-0daf-4c7e-a023-a7082d8d73cf" containerID="cfb7fc79ff5a728a120650052bd3ff240e06f929a54c3a2f5efc1ad8f2dd226b" exitCode=0 Jan 29 17:06:08 crc kubenswrapper[4886]: I0129 17:06:08.228702 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" event={"ID":"2c74b25a-0daf-4c7e-a023-a7082d8d73cf","Type":"ContainerDied","Data":"cfb7fc79ff5a728a120650052bd3ff240e06f929a54c3a2f5efc1ad8f2dd226b"} Jan 29 17:06:08 crc kubenswrapper[4886]: I0129 17:06:08.241786 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87986c31-37d7-4624-87a2-b5678e01d865","Type":"ContainerStarted","Data":"3e6ce925c7e7561fcefff1c9869e186415899419d2d1d24db82a0097aea34d23"} Jan 29 17:06:08 crc kubenswrapper[4886]: I0129 17:06:08.316169 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-8962p"] Jan 29 17:06:08 crc kubenswrapper[4886]: I0129 17:06:08.326620 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-8962p"] Jan 29 17:06:08 crc kubenswrapper[4886]: I0129 17:06:08.654076 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fca7a19-7db1-4a2e-9f55-d55442cfda87" path="/var/lib/kubelet/pods/1fca7a19-7db1-4a2e-9f55-d55442cfda87/volumes" Jan 29 17:06:09 crc kubenswrapper[4886]: I0129 17:06:09.254844 4886 generic.go:334] "Generic (PLEG): container finished" podID="9f114908-5594-4378-939f-f54b2157d676" containerID="76e9fd9551f88713599d793f819bec47fc38185510d47fbd152e0939943ac037" exitCode=0 Jan 29 17:06:09 crc kubenswrapper[4886]: I0129 
17:06:09.254898 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-thqn5" event={"ID":"9f114908-5594-4378-939f-f54b2157d676","Type":"ContainerDied","Data":"76e9fd9551f88713599d793f819bec47fc38185510d47fbd152e0939943ac037"} Jan 29 17:06:09 crc kubenswrapper[4886]: I0129 17:06:09.258497 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" event={"ID":"2c74b25a-0daf-4c7e-a023-a7082d8d73cf","Type":"ContainerStarted","Data":"2b118af4cda69e6639958e45442cbb3e2fb4932b299bff9387ea1c20cb9f4e45"} Jan 29 17:06:09 crc kubenswrapper[4886]: I0129 17:06:09.258932 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" Jan 29 17:06:09 crc kubenswrapper[4886]: I0129 17:06:09.334398 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" podStartSLOduration=5.334370352 podStartE2EDuration="5.334370352s" podCreationTimestamp="2026-01-29 17:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:06:09.322106416 +0000 UTC m=+2652.230825698" watchObservedRunningTime="2026-01-29 17:06:09.334370352 +0000 UTC m=+2652.243089624" Jan 29 17:06:11 crc kubenswrapper[4886]: I0129 17:06:11.280005 4886 generic.go:334] "Generic (PLEG): container finished" podID="676a9025-a673-4a70-aa9d-ec34c1db17be" containerID="9b68510df598b451ff2d4faad4a0af1636831487ecf72ad66ce874c635cd8d9e" exitCode=0 Jan 29 17:06:11 crc kubenswrapper[4886]: I0129 17:06:11.280066 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b5c9h" event={"ID":"676a9025-a673-4a70-aa9d-ec34c1db17be","Type":"ContainerDied","Data":"9b68510df598b451ff2d4faad4a0af1636831487ecf72ad66ce874c635cd8d9e"} Jan 29 17:06:12 crc kubenswrapper[4886]: I0129 17:06:12.995267 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-b5c9h" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.003523 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-thqn5" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.165295 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9fhh\" (UniqueName: \"kubernetes.io/projected/676a9025-a673-4a70-aa9d-ec34c1db17be-kube-api-access-n9fhh\") pod \"676a9025-a673-4a70-aa9d-ec34c1db17be\" (UID: \"676a9025-a673-4a70-aa9d-ec34c1db17be\") " Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.165418 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f114908-5594-4378-939f-f54b2157d676-combined-ca-bundle\") pod \"9f114908-5594-4378-939f-f54b2157d676\" (UID: \"9f114908-5594-4378-939f-f54b2157d676\") " Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.165470 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f114908-5594-4378-939f-f54b2157d676-config-data\") pod \"9f114908-5594-4378-939f-f54b2157d676\" (UID: \"9f114908-5594-4378-939f-f54b2157d676\") " Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.165553 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/676a9025-a673-4a70-aa9d-ec34c1db17be-credential-keys\") pod \"676a9025-a673-4a70-aa9d-ec34c1db17be\" (UID: \"676a9025-a673-4a70-aa9d-ec34c1db17be\") " Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.165669 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/676a9025-a673-4a70-aa9d-ec34c1db17be-fernet-keys\") pod \"676a9025-a673-4a70-aa9d-ec34c1db17be\" (UID: \"676a9025-a673-4a70-aa9d-ec34c1db17be\") " Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.165706 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/676a9025-a673-4a70-aa9d-ec34c1db17be-scripts\") pod \"676a9025-a673-4a70-aa9d-ec34c1db17be\" (UID: \"676a9025-a673-4a70-aa9d-ec34c1db17be\") " Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.165736 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c7r8\" (UniqueName: \"kubernetes.io/projected/9f114908-5594-4378-939f-f54b2157d676-kube-api-access-6c7r8\") pod \"9f114908-5594-4378-939f-f54b2157d676\" (UID: \"9f114908-5594-4378-939f-f54b2157d676\") " Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.165757 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/676a9025-a673-4a70-aa9d-ec34c1db17be-combined-ca-bundle\") pod \"676a9025-a673-4a70-aa9d-ec34c1db17be\" (UID: \"676a9025-a673-4a70-aa9d-ec34c1db17be\") " Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.165807 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9f114908-5594-4378-939f-f54b2157d676-db-sync-config-data\") pod \"9f114908-5594-4378-939f-f54b2157d676\" (UID: \"9f114908-5594-4378-939f-f54b2157d676\") " Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.165880 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/676a9025-a673-4a70-aa9d-ec34c1db17be-config-data\") pod \"676a9025-a673-4a70-aa9d-ec34c1db17be\" 
(UID: \"676a9025-a673-4a70-aa9d-ec34c1db17be\") " Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.171833 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f114908-5594-4378-939f-f54b2157d676-kube-api-access-6c7r8" (OuterVolumeSpecName: "kube-api-access-6c7r8") pod "9f114908-5594-4378-939f-f54b2157d676" (UID: "9f114908-5594-4378-939f-f54b2157d676"). InnerVolumeSpecName "kube-api-access-6c7r8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.173449 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/676a9025-a673-4a70-aa9d-ec34c1db17be-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "676a9025-a673-4a70-aa9d-ec34c1db17be" (UID: "676a9025-a673-4a70-aa9d-ec34c1db17be"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.175270 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f114908-5594-4378-939f-f54b2157d676-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9f114908-5594-4378-939f-f54b2157d676" (UID: "9f114908-5594-4378-939f-f54b2157d676"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.176490 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/676a9025-a673-4a70-aa9d-ec34c1db17be-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "676a9025-a673-4a70-aa9d-ec34c1db17be" (UID: "676a9025-a673-4a70-aa9d-ec34c1db17be"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.180174 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/676a9025-a673-4a70-aa9d-ec34c1db17be-kube-api-access-n9fhh" (OuterVolumeSpecName: "kube-api-access-n9fhh") pod "676a9025-a673-4a70-aa9d-ec34c1db17be" (UID: "676a9025-a673-4a70-aa9d-ec34c1db17be"). InnerVolumeSpecName "kube-api-access-n9fhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.181441 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/676a9025-a673-4a70-aa9d-ec34c1db17be-scripts" (OuterVolumeSpecName: "scripts") pod "676a9025-a673-4a70-aa9d-ec34c1db17be" (UID: "676a9025-a673-4a70-aa9d-ec34c1db17be"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.205673 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/676a9025-a673-4a70-aa9d-ec34c1db17be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "676a9025-a673-4a70-aa9d-ec34c1db17be" (UID: "676a9025-a673-4a70-aa9d-ec34c1db17be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.208722 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/676a9025-a673-4a70-aa9d-ec34c1db17be-config-data" (OuterVolumeSpecName: "config-data") pod "676a9025-a673-4a70-aa9d-ec34c1db17be" (UID: "676a9025-a673-4a70-aa9d-ec34c1db17be"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.209064 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f114908-5594-4378-939f-f54b2157d676-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f114908-5594-4378-939f-f54b2157d676" (UID: "9f114908-5594-4378-939f-f54b2157d676"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.250702 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f114908-5594-4378-939f-f54b2157d676-config-data" (OuterVolumeSpecName: "config-data") pod "9f114908-5594-4378-939f-f54b2157d676" (UID: "9f114908-5594-4378-939f-f54b2157d676"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.268031 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f114908-5594-4378-939f-f54b2157d676-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.268104 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f114908-5594-4378-939f-f54b2157d676-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.268116 4886 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/676a9025-a673-4a70-aa9d-ec34c1db17be-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.268124 4886 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/676a9025-a673-4a70-aa9d-ec34c1db17be-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.268172 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/676a9025-a673-4a70-aa9d-ec34c1db17be-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.268181 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c7r8\" (UniqueName: \"kubernetes.io/projected/9f114908-5594-4378-939f-f54b2157d676-kube-api-access-6c7r8\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.268193 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/676a9025-a673-4a70-aa9d-ec34c1db17be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.268201 4886 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9f114908-5594-4378-939f-f54b2157d676-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.268210 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/676a9025-a673-4a70-aa9d-ec34c1db17be-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.268238 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9fhh\" (UniqueName: 
\"kubernetes.io/projected/676a9025-a673-4a70-aa9d-ec34c1db17be-kube-api-access-n9fhh\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.301645 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-thqn5" event={"ID":"9f114908-5594-4378-939f-f54b2157d676","Type":"ContainerDied","Data":"fcc8bbf40553cde9c2b386443b55115feca44b41f5cbd715334aa7b1506eef78"} Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.301679 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcc8bbf40553cde9c2b386443b55115feca44b41f5cbd715334aa7b1506eef78" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.301753 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-thqn5" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.309974 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b5c9h" event={"ID":"676a9025-a673-4a70-aa9d-ec34c1db17be","Type":"ContainerDied","Data":"24f822770ac33b496012b10bfe803c315a5cfcfd68498769b1825800fd0da253"} Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.310029 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24f822770ac33b496012b10bfe803c315a5cfcfd68498769b1825800fd0da253" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.310071 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-b5c9h" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.372966 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-b5c9h"] Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.385812 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-b5c9h"] Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.478129 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-p924n"] Jan 29 17:06:13 crc kubenswrapper[4886]: E0129 17:06:13.478910 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f114908-5594-4378-939f-f54b2157d676" containerName="glance-db-sync" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.478950 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f114908-5594-4378-939f-f54b2157d676" containerName="glance-db-sync" Jan 29 17:06:13 crc kubenswrapper[4886]: E0129 17:06:13.478971 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="676a9025-a673-4a70-aa9d-ec34c1db17be" containerName="keystone-bootstrap" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.478979 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="676a9025-a673-4a70-aa9d-ec34c1db17be" containerName="keystone-bootstrap" Jan 29 17:06:13 crc kubenswrapper[4886]: E0129 17:06:13.478994 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fca7a19-7db1-4a2e-9f55-d55442cfda87" containerName="init" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.479003 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fca7a19-7db1-4a2e-9f55-d55442cfda87" containerName="init" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.479371 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="676a9025-a673-4a70-aa9d-ec34c1db17be" containerName="keystone-bootstrap" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.480149 4886 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1fca7a19-7db1-4a2e-9f55-d55442cfda87" containerName="init" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.480184 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f114908-5594-4378-939f-f54b2157d676" containerName="glance-db-sync" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.481225 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p924n" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.485172 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.485657 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.485669 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-k5qcd" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.485695 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.487340 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.494759 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-p924n"] Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.676511 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/68cdc6ed-ce63-43af-8502-b36cc0ae788a-fernet-keys\") pod \"keystone-bootstrap-p924n\" (UID: \"68cdc6ed-ce63-43af-8502-b36cc0ae788a\") " pod="openstack/keystone-bootstrap-p924n" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.676645 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq47h\" (UniqueName: \"kubernetes.io/projected/68cdc6ed-ce63-43af-8502-b36cc0ae788a-kube-api-access-cq47h\") pod \"keystone-bootstrap-p924n\" (UID: \"68cdc6ed-ce63-43af-8502-b36cc0ae788a\") " pod="openstack/keystone-bootstrap-p924n" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.676680 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/68cdc6ed-ce63-43af-8502-b36cc0ae788a-credential-keys\") pod \"keystone-bootstrap-p924n\" (UID: \"68cdc6ed-ce63-43af-8502-b36cc0ae788a\") " pod="openstack/keystone-bootstrap-p924n" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.676737 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68cdc6ed-ce63-43af-8502-b36cc0ae788a-config-data\") pod \"keystone-bootstrap-p924n\" (UID: \"68cdc6ed-ce63-43af-8502-b36cc0ae788a\") " pod="openstack/keystone-bootstrap-p924n" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.676844 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68cdc6ed-ce63-43af-8502-b36cc0ae788a-scripts\") pod \"keystone-bootstrap-p924n\" (UID: \"68cdc6ed-ce63-43af-8502-b36cc0ae788a\") " pod="openstack/keystone-bootstrap-p924n" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.676880 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68cdc6ed-ce63-43af-8502-b36cc0ae788a-combined-ca-bundle\") pod \"keystone-bootstrap-p924n\" (UID: \"68cdc6ed-ce63-43af-8502-b36cc0ae788a\") " pod="openstack/keystone-bootstrap-p924n" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.779558 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68cdc6ed-ce63-43af-8502-b36cc0ae788a-scripts\") pod \"keystone-bootstrap-p924n\" (UID: \"68cdc6ed-ce63-43af-8502-b36cc0ae788a\") " pod="openstack/keystone-bootstrap-p924n" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.779648 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68cdc6ed-ce63-43af-8502-b36cc0ae788a-combined-ca-bundle\") pod \"keystone-bootstrap-p924n\" (UID: \"68cdc6ed-ce63-43af-8502-b36cc0ae788a\") " pod="openstack/keystone-bootstrap-p924n" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.779780 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/68cdc6ed-ce63-43af-8502-b36cc0ae788a-fernet-keys\") pod \"keystone-bootstrap-p924n\" (UID: \"68cdc6ed-ce63-43af-8502-b36cc0ae788a\") " pod="openstack/keystone-bootstrap-p924n" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.780110 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq47h\" (UniqueName: \"kubernetes.io/projected/68cdc6ed-ce63-43af-8502-b36cc0ae788a-kube-api-access-cq47h\") pod \"keystone-bootstrap-p924n\" (UID: \"68cdc6ed-ce63-43af-8502-b36cc0ae788a\") " pod="openstack/keystone-bootstrap-p924n" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.780170 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/68cdc6ed-ce63-43af-8502-b36cc0ae788a-credential-keys\") pod \"keystone-bootstrap-p924n\" (UID: \"68cdc6ed-ce63-43af-8502-b36cc0ae788a\") " pod="openstack/keystone-bootstrap-p924n" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.780265 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68cdc6ed-ce63-43af-8502-b36cc0ae788a-config-data\") pod \"keystone-bootstrap-p924n\" (UID: \"68cdc6ed-ce63-43af-8502-b36cc0ae788a\") " pod="openstack/keystone-bootstrap-p924n" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.785016 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68cdc6ed-ce63-43af-8502-b36cc0ae788a-scripts\") pod \"keystone-bootstrap-p924n\" (UID: \"68cdc6ed-ce63-43af-8502-b36cc0ae788a\") " pod="openstack/keystone-bootstrap-p924n" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.785054 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68cdc6ed-ce63-43af-8502-b36cc0ae788a-combined-ca-bundle\") pod \"keystone-bootstrap-p924n\" (UID: \"68cdc6ed-ce63-43af-8502-b36cc0ae788a\") " pod="openstack/keystone-bootstrap-p924n" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.785037 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/68cdc6ed-ce63-43af-8502-b36cc0ae788a-fernet-keys\") pod \"keystone-bootstrap-p924n\" (UID: 
\"68cdc6ed-ce63-43af-8502-b36cc0ae788a\") " pod="openstack/keystone-bootstrap-p924n" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.785174 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68cdc6ed-ce63-43af-8502-b36cc0ae788a-config-data\") pod \"keystone-bootstrap-p924n\" (UID: \"68cdc6ed-ce63-43af-8502-b36cc0ae788a\") " pod="openstack/keystone-bootstrap-p924n" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.786241 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/68cdc6ed-ce63-43af-8502-b36cc0ae788a-credential-keys\") pod \"keystone-bootstrap-p924n\" (UID: \"68cdc6ed-ce63-43af-8502-b36cc0ae788a\") " pod="openstack/keystone-bootstrap-p924n" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.796734 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq47h\" (UniqueName: \"kubernetes.io/projected/68cdc6ed-ce63-43af-8502-b36cc0ae788a-kube-api-access-cq47h\") pod \"keystone-bootstrap-p924n\" (UID: \"68cdc6ed-ce63-43af-8502-b36cc0ae788a\") " pod="openstack/keystone-bootstrap-p924n" Jan 29 17:06:13 crc kubenswrapper[4886]: I0129 17:06:13.808169 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p924n" Jan 29 17:06:14 crc kubenswrapper[4886]: I0129 17:06:14.480769 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-5smww"] Jan 29 17:06:14 crc kubenswrapper[4886]: I0129 17:06:14.481279 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" podUID="2c74b25a-0daf-4c7e-a023-a7082d8d73cf" containerName="dnsmasq-dns" containerID="cri-o://2b118af4cda69e6639958e45442cbb3e2fb4932b299bff9387ea1c20cb9f4e45" gracePeriod=10 Jan 29 17:06:14 crc kubenswrapper[4886]: I0129 17:06:14.482769 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" Jan 29 17:06:14 crc kubenswrapper[4886]: I0129 17:06:14.532587 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-96hn8"] Jan 29 17:06:14 crc kubenswrapper[4886]: I0129 17:06:14.534366 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" Jan 29 17:06:14 crc kubenswrapper[4886]: I0129 17:06:14.552761 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-96hn8"] Jan 29 17:06:14 crc kubenswrapper[4886]: I0129 17:06:14.614729 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80d171a6-11ab-4cdf-b469-acb56ff11735-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-96hn8\" (UID: \"80d171a6-11ab-4cdf-b469-acb56ff11735\") " pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" Jan 29 17:06:14 crc kubenswrapper[4886]: I0129 17:06:14.614789 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80d171a6-11ab-4cdf-b469-acb56ff11735-config\") pod \"dnsmasq-dns-785d8bcb8c-96hn8\" (UID: \"80d171a6-11ab-4cdf-b469-acb56ff11735\") " pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" Jan 29 17:06:14 crc kubenswrapper[4886]: I0129 17:06:14.614870 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80d171a6-11ab-4cdf-b469-acb56ff11735-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-96hn8\" (UID: \"80d171a6-11ab-4cdf-b469-acb56ff11735\") " pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" Jan 29 17:06:14 crc kubenswrapper[4886]: I0129 17:06:14.614986 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/80d171a6-11ab-4cdf-b469-acb56ff11735-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-96hn8\" (UID: \"80d171a6-11ab-4cdf-b469-acb56ff11735\") " pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" Jan 29 17:06:14 crc kubenswrapper[4886]: I0129 17:06:14.615014 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8bwl\" (UniqueName: \"kubernetes.io/projected/80d171a6-11ab-4cdf-b469-acb56ff11735-kube-api-access-t8bwl\") pod \"dnsmasq-dns-785d8bcb8c-96hn8\" (UID: \"80d171a6-11ab-4cdf-b469-acb56ff11735\") " pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" Jan 29 17:06:14 crc kubenswrapper[4886]: I0129 17:06:14.615072 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80d171a6-11ab-4cdf-b469-acb56ff11735-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-96hn8\" (UID: \"80d171a6-11ab-4cdf-b469-acb56ff11735\") " pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" Jan 29 17:06:14 crc kubenswrapper[4886]: I0129 17:06:14.633213 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="676a9025-a673-4a70-aa9d-ec34c1db17be" path="/var/lib/kubelet/pods/676a9025-a673-4a70-aa9d-ec34c1db17be/volumes" Jan 29 17:06:14 crc kubenswrapper[4886]: I0129 17:06:14.716422 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/80d171a6-11ab-4cdf-b469-acb56ff11735-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-96hn8\" (UID: \"80d171a6-11ab-4cdf-b469-acb56ff11735\") " pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" Jan 29 17:06:14 crc kubenswrapper[4886]: I0129 17:06:14.716473 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8bwl\" (UniqueName: 
\"kubernetes.io/projected/80d171a6-11ab-4cdf-b469-acb56ff11735-kube-api-access-t8bwl\") pod \"dnsmasq-dns-785d8bcb8c-96hn8\" (UID: \"80d171a6-11ab-4cdf-b469-acb56ff11735\") " pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" Jan 29 17:06:14 crc kubenswrapper[4886]: I0129 17:06:14.716541 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80d171a6-11ab-4cdf-b469-acb56ff11735-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-96hn8\" (UID: \"80d171a6-11ab-4cdf-b469-acb56ff11735\") " pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" Jan 29 17:06:14 crc kubenswrapper[4886]: I0129 17:06:14.716667 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80d171a6-11ab-4cdf-b469-acb56ff11735-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-96hn8\" (UID: \"80d171a6-11ab-4cdf-b469-acb56ff11735\") " pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" Jan 29 17:06:14 crc kubenswrapper[4886]: I0129 17:06:14.716701 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80d171a6-11ab-4cdf-b469-acb56ff11735-config\") pod \"dnsmasq-dns-785d8bcb8c-96hn8\" (UID: \"80d171a6-11ab-4cdf-b469-acb56ff11735\") " pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" Jan 29 17:06:14 crc kubenswrapper[4886]: I0129 17:06:14.716751 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80d171a6-11ab-4cdf-b469-acb56ff11735-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-96hn8\" (UID: \"80d171a6-11ab-4cdf-b469-acb56ff11735\") " pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" Jan 29 17:06:14 crc kubenswrapper[4886]: I0129 17:06:14.717698 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/80d171a6-11ab-4cdf-b469-acb56ff11735-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-96hn8\" (UID: \"80d171a6-11ab-4cdf-b469-acb56ff11735\") " pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" Jan 29 17:06:14 crc kubenswrapper[4886]: I0129 17:06:14.717730 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80d171a6-11ab-4cdf-b469-acb56ff11735-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-96hn8\" (UID: \"80d171a6-11ab-4cdf-b469-acb56ff11735\") " pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" Jan 29 17:06:14 crc kubenswrapper[4886]: I0129 17:06:14.717785 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80d171a6-11ab-4cdf-b469-acb56ff11735-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-96hn8\" (UID: \"80d171a6-11ab-4cdf-b469-acb56ff11735\") " pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" Jan 29 17:06:14 crc kubenswrapper[4886]: I0129 17:06:14.718096 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80d171a6-11ab-4cdf-b469-acb56ff11735-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-96hn8\" (UID: \"80d171a6-11ab-4cdf-b469-acb56ff11735\") " pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" Jan 29 17:06:14 crc kubenswrapper[4886]: I0129 17:06:14.718478 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80d171a6-11ab-4cdf-b469-acb56ff11735-config\") pod \"dnsmasq-dns-785d8bcb8c-96hn8\" (UID: 
\"80d171a6-11ab-4cdf-b469-acb56ff11735\") " pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" Jan 29 17:06:14 crc kubenswrapper[4886]: I0129 17:06:14.737450 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8bwl\" (UniqueName: \"kubernetes.io/projected/80d171a6-11ab-4cdf-b469-acb56ff11735-kube-api-access-t8bwl\") pod \"dnsmasq-dns-785d8bcb8c-96hn8\" (UID: \"80d171a6-11ab-4cdf-b469-acb56ff11735\") " pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" Jan 29 17:06:14 crc kubenswrapper[4886]: I0129 17:06:14.868768 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.338711 4886 generic.go:334] "Generic (PLEG): container finished" podID="2c74b25a-0daf-4c7e-a023-a7082d8d73cf" containerID="2b118af4cda69e6639958e45442cbb3e2fb4932b299bff9387ea1c20cb9f4e45" exitCode=0 Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.338765 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" event={"ID":"2c74b25a-0daf-4c7e-a023-a7082d8d73cf","Type":"ContainerDied","Data":"2b118af4cda69e6639958e45442cbb3e2fb4932b299bff9387ea1c20cb9f4e45"} Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.394181 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.397684 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.406311 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.409854 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-cpfdg" Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.432188 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.465537 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.487437 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" podUID="2c74b25a-0daf-4c7e-a023-a7082d8d73cf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.202:5353: connect: connection refused" Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.537221 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\") pod \"glance-default-external-api-0\" (UID: \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\") " pod="openstack/glance-default-external-api-0" Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.537283 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-648q7\" (UniqueName: \"kubernetes.io/projected/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-kube-api-access-648q7\") pod \"glance-default-external-api-0\" (UID: \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\") " pod="openstack/glance-default-external-api-0" Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.537364 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-scripts\") pod \"glance-default-external-api-0\" (UID: \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\") " pod="openstack/glance-default-external-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.537395 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-config-data\") pod \"glance-default-external-api-0\" (UID: \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\") " pod="openstack/glance-default-external-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.537418 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\") " pod="openstack/glance-default-external-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.537480 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\") " pod="openstack/glance-default-external-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.537563 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-logs\") pod \"glance-default-external-api-0\" (UID: \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\") " pod="openstack/glance-default-external-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.622449 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.624650 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.628885 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.639141 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-scripts\") pod \"glance-default-external-api-0\" (UID: \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\") " pod="openstack/glance-default-external-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.639180 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-config-data\") pod \"glance-default-external-api-0\" (UID: \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\") " pod="openstack/glance-default-external-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.639205 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\") " pod="openstack/glance-default-external-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.639245 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\") " pod="openstack/glance-default-external-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.639315 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-logs\") pod \"glance-default-external-api-0\" (UID: \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\") " pod="openstack/glance-default-external-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.639438 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\") pod \"glance-default-external-api-0\" (UID: \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\") " pod="openstack/glance-default-external-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.639463 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-648q7\" (UniqueName: \"kubernetes.io/projected/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-kube-api-access-648q7\") pod \"glance-default-external-api-0\" (UID: \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\") " pod="openstack/glance-default-external-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.640528 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-logs\") pod \"glance-default-external-api-0\" (UID: \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\") " pod="openstack/glance-default-external-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.640540 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\") " pod="openstack/glance-default-external-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.645170 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.645232 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\") pod \"glance-default-external-api-0\" (UID: \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9fc1bf04f61733e1543e4c6d32069c38c610c3d0fa9a349fa6a409f3542d3c50/globalmount\"" pod="openstack/glance-default-external-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.645299 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.646984 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-scripts\") pod \"glance-default-external-api-0\" (UID: \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\") " pod="openstack/glance-default-external-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.647918 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-config-data\") pod \"glance-default-external-api-0\" (UID: \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\") " pod="openstack/glance-default-external-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.649978 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\") " pod="openstack/glance-default-external-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.674025 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-648q7\" (UniqueName: \"kubernetes.io/projected/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-kube-api-access-648q7\") pod \"glance-default-external-api-0\" (UID: \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\") " pod="openstack/glance-default-external-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.695384 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\") pod \"glance-default-external-api-0\" (UID: \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\") " pod="openstack/glance-default-external-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.731755 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.741436 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs4f7\" (UniqueName: \"kubernetes.io/projected/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-kube-api-access-xs4f7\") pod \"glance-default-internal-api-0\" (UID: \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\") " pod="openstack/glance-default-internal-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.741589 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\") " pod="openstack/glance-default-internal-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.741612 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\") " pod="openstack/glance-default-internal-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.741668 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\") " pod="openstack/glance-default-internal-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.741754 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-logs\") pod \"glance-default-internal-api-0\" (UID: \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\") " pod="openstack/glance-default-internal-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.741845 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\") pod \"glance-default-internal-api-0\" (UID: \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\") " pod="openstack/glance-default-internal-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.741911 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\") " pod="openstack/glance-default-internal-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.843895 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\") " pod="openstack/glance-default-internal-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.843938 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\") " pod="openstack/glance-default-internal-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.843982 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\") " pod="openstack/glance-default-internal-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.844048 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-logs\") pod \"glance-default-internal-api-0\" (UID: \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\") " pod="openstack/glance-default-internal-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.844113 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\") pod \"glance-default-internal-api-0\" (UID: \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\") " pod="openstack/glance-default-internal-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.844150 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\") " pod="openstack/glance-default-internal-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.844200 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs4f7\" (UniqueName: \"kubernetes.io/projected/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-kube-api-access-xs4f7\") pod \"glance-default-internal-api-0\" (UID: \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\") " pod="openstack/glance-default-internal-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.845003 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-logs\") pod \"glance-default-internal-api-0\" (UID: \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\") " pod="openstack/glance-default-internal-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.845094 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\") " pod="openstack/glance-default-internal-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.848127 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.848361 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\") pod \"glance-default-internal-api-0\" (UID: \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a7b71ee9dc20b2cd8e0489051d74fcf4864cc02a892819f8a5785e080087446e/globalmount\"" pod="openstack/glance-default-internal-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.849204 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\") " pod="openstack/glance-default-internal-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.858218 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\") " pod="openstack/glance-default-internal-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.861136 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\") " pod="openstack/glance-default-internal-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.866259 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs4f7\" (UniqueName: \"kubernetes.io/projected/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-kube-api-access-xs4f7\") pod \"glance-default-internal-api-0\" (UID: \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\") " pod="openstack/glance-default-internal-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.938495 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\") pod \"glance-default-internal-api-0\" (UID: \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\") " pod="openstack/glance-default-internal-api-0"
Jan 29 17:06:15 crc kubenswrapper[4886]: I0129 17:06:15.946868 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 29 17:06:16 crc kubenswrapper[4886]: I0129 17:06:16.995070 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 17:06:17 crc kubenswrapper[4886]: I0129 17:06:17.069950 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 17:06:20 crc kubenswrapper[4886]: I0129 17:06:20.487213 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" podUID="2c74b25a-0daf-4c7e-a023-a7082d8d73cf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.202:5353: connect: connection refused"
Jan 29 17:06:25 crc kubenswrapper[4886]: I0129 17:06:25.487124 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" podUID="2c74b25a-0daf-4c7e-a023-a7082d8d73cf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.202:5353: connect: connection refused"
Jan 29 17:06:25 crc kubenswrapper[4886]: I0129 17:06:25.488635 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww"
Jan 29 17:06:35 crc kubenswrapper[4886]: I0129 17:06:35.488619 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" podUID="2c74b25a-0daf-4c7e-a023-a7082d8d73cf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.202:5353: i/o timeout"
Jan 29 17:06:40 crc kubenswrapper[4886]: I0129 17:06:40.490386 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" podUID="2c74b25a-0daf-4c7e-a023-a7082d8d73cf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.202:5353: i/o timeout"
Jan 29 17:06:45 crc kubenswrapper[4886]: I0129 17:06:45.491596 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" podUID="2c74b25a-0daf-4c7e-a023-a7082d8d73cf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.202:5353: i/o timeout"
Jan 29 17:06:46 crc kubenswrapper[4886]: E0129 17:06:46.350073 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified"
Jan 29 17:06:46 crc kubenswrapper[4886]: E0129 17:06:46.350545 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9v7hl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-6nmwn_openstack(a0058f32-ae80-4dde-9dce-095c62f45979): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 29 17:06:46 crc kubenswrapper[4886]: E0129 17:06:46.351769 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-6nmwn" podUID="a0058f32-ae80-4dde-9dce-095c62f45979"
Jan 29 17:06:46 crc kubenswrapper[4886]: E0129 17:06:46.673634 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-6nmwn" podUID="a0058f32-ae80-4dde-9dce-095c62f45979"
Jan 29 17:06:46 crc kubenswrapper[4886]: E0129 17:06:46.883197 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified"
Jan 29 17:06:46 crc kubenswrapper[4886]: E0129 17:06:46.883478 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-86p7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-q2dxw_openstack(ffb099fb-7bdb-4969-b3cb-6fc4ef498afd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 29 17:06:46 crc kubenswrapper[4886]: E0129 17:06:46.884820 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-q2dxw" podUID="ffb099fb-7bdb-4969-b3cb-6fc4ef498afd"
Jan 29 17:06:47 crc kubenswrapper[4886]: I0129 17:06:47.107991 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww"
Jan 29 17:06:47 crc kubenswrapper[4886]: I0129 17:06:47.212277 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88mjr\" (UniqueName: \"kubernetes.io/projected/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-kube-api-access-88mjr\") pod \"2c74b25a-0daf-4c7e-a023-a7082d8d73cf\" (UID: \"2c74b25a-0daf-4c7e-a023-a7082d8d73cf\") "
Jan 29 17:06:47 crc kubenswrapper[4886]: I0129 17:06:47.212512 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-dns-swift-storage-0\") pod \"2c74b25a-0daf-4c7e-a023-a7082d8d73cf\" (UID: \"2c74b25a-0daf-4c7e-a023-a7082d8d73cf\") "
Jan 29 17:06:47 crc kubenswrapper[4886]: I0129 17:06:47.212591 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-config\") pod \"2c74b25a-0daf-4c7e-a023-a7082d8d73cf\" (UID: \"2c74b25a-0daf-4c7e-a023-a7082d8d73cf\") "
Jan 29 17:06:47 crc kubenswrapper[4886]: I0129 17:06:47.212665 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-ovsdbserver-sb\") pod \"2c74b25a-0daf-4c7e-a023-a7082d8d73cf\" (UID: \"2c74b25a-0daf-4c7e-a023-a7082d8d73cf\") "
Jan 29 17:06:47 crc kubenswrapper[4886]: I0129 17:06:47.212736 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-ovsdbserver-nb\") pod \"2c74b25a-0daf-4c7e-a023-a7082d8d73cf\" (UID: \"2c74b25a-0daf-4c7e-a023-a7082d8d73cf\") "
Jan 29 17:06:47 crc kubenswrapper[4886]: I0129 17:06:47.212809 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-dns-svc\") pod \"2c74b25a-0daf-4c7e-a023-a7082d8d73cf\" (UID: \"2c74b25a-0daf-4c7e-a023-a7082d8d73cf\") "
Jan 29 17:06:47 crc kubenswrapper[4886]: I0129 17:06:47.216774 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-kube-api-access-88mjr" (OuterVolumeSpecName: "kube-api-access-88mjr") pod "2c74b25a-0daf-4c7e-a023-a7082d8d73cf" (UID: "2c74b25a-0daf-4c7e-a023-a7082d8d73cf"). InnerVolumeSpecName "kube-api-access-88mjr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:06:47 crc kubenswrapper[4886]: I0129 17:06:47.272666 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-config" (OuterVolumeSpecName: "config") pod "2c74b25a-0daf-4c7e-a023-a7082d8d73cf" (UID: "2c74b25a-0daf-4c7e-a023-a7082d8d73cf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 17:06:47 crc kubenswrapper[4886]: I0129 17:06:47.275616 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2c74b25a-0daf-4c7e-a023-a7082d8d73cf" (UID: "2c74b25a-0daf-4c7e-a023-a7082d8d73cf"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 17:06:47 crc kubenswrapper[4886]: I0129 17:06:47.283128 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2c74b25a-0daf-4c7e-a023-a7082d8d73cf" (UID: "2c74b25a-0daf-4c7e-a023-a7082d8d73cf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 17:06:47 crc kubenswrapper[4886]: I0129 17:06:47.284159 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2c74b25a-0daf-4c7e-a023-a7082d8d73cf" (UID: "2c74b25a-0daf-4c7e-a023-a7082d8d73cf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 17:06:47 crc kubenswrapper[4886]: I0129 17:06:47.288299 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2c74b25a-0daf-4c7e-a023-a7082d8d73cf" (UID: "2c74b25a-0daf-4c7e-a023-a7082d8d73cf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 17:06:47 crc kubenswrapper[4886]: I0129 17:06:47.316413 4886 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 29 17:06:47 crc kubenswrapper[4886]: I0129 17:06:47.316487 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-config\") on node \"crc\" DevicePath \"\""
Jan 29 17:06:47 crc kubenswrapper[4886]: I0129 17:06:47.316502 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 29 17:06:47 crc kubenswrapper[4886]: I0129 17:06:47.316516 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 29 17:06:47 crc kubenswrapper[4886]: I0129 17:06:47.316530 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 29 17:06:47 crc kubenswrapper[4886]: I0129 17:06:47.316656 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88mjr\" (UniqueName: \"kubernetes.io/projected/2c74b25a-0daf-4c7e-a023-a7082d8d73cf-kube-api-access-88mjr\") on node \"crc\" DevicePath \"\""
Jan 29 17:06:47 crc kubenswrapper[4886]: I0129 17:06:47.686087 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" event={"ID":"2c74b25a-0daf-4c7e-a023-a7082d8d73cf","Type":"ContainerDied","Data":"02d41ab973396ad0b9067fb7d12dd022b4232ab3e2460c195caa3ce7c6f4e250"}
Jan 29 17:06:47 crc kubenswrapper[4886]: I0129 17:06:47.686460 4886 scope.go:117] "RemoveContainer" containerID="2b118af4cda69e6639958e45442cbb3e2fb4932b299bff9387ea1c20cb9f4e45"
Jan 29 17:06:47 crc kubenswrapper[4886]: I0129 17:06:47.686150 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww"
Jan 29 17:06:47 crc kubenswrapper[4886]: E0129 17:06:47.689647 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-q2dxw" podUID="ffb099fb-7bdb-4969-b3cb-6fc4ef498afd"
Jan 29 17:06:47 crc kubenswrapper[4886]: I0129 17:06:47.734225 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-5smww"]
Jan 29 17:06:47 crc kubenswrapper[4886]: I0129 17:06:47.743856 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-5smww"]
Jan 29 17:06:48 crc kubenswrapper[4886]: E0129 17:06:48.613514 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified"
Jan 29 17:06:48 crc kubenswrapper[4886]: E0129 17:06:48.613880 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rkdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-j5gfz_openstack(04dae116-ceca-4588-9cba-1266bfa92caf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 29 17:06:48 crc kubenswrapper[4886]: E0129 17:06:48.616730 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-j5gfz" podUID="04dae116-ceca-4588-9cba-1266bfa92caf"
Jan 29 17:06:48 crc kubenswrapper[4886]: I0129 17:06:48.628879 4886 scope.go:117] "RemoveContainer" containerID="cfb7fc79ff5a728a120650052bd3ff240e06f929a54c3a2f5efc1ad8f2dd226b"
Jan 29 17:06:48 crc kubenswrapper[4886]: I0129 17:06:48.635149 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c74b25a-0daf-4c7e-a023-a7082d8d73cf" path="/var/lib/kubelet/pods/2c74b25a-0daf-4c7e-a023-a7082d8d73cf/volumes"
Jan 29 17:06:48 crc kubenswrapper[4886]: E0129 17:06:48.733315 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-j5gfz" podUID="04dae116-ceca-4588-9cba-1266bfa92caf"
Jan 29 17:06:49 crc kubenswrapper[4886]: I0129 17:06:49.202937 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-p924n"]
Jan 29 17:06:49 crc kubenswrapper[4886]: I0129 17:06:49.276494 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 17:06:49 crc kubenswrapper[4886]: I0129 17:06:49.350315 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-96hn8"]
Jan 29 17:06:49 crc kubenswrapper[4886]: W0129 17:06:49.370815 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80d171a6_11ab_4cdf_b469_acb56ff11735.slice/crio-81bf0e642c0dbb7fd724006f0c2c518606f7b43d2584453df92bcfe55b829357 WatchSource:0}: Error finding container 81bf0e642c0dbb7fd724006f0c2c518606f7b43d2584453df92bcfe55b829357: Status 404 returned error can't find the container with id 81bf0e642c0dbb7fd724006f0c2c518606f7b43d2584453df92bcfe55b829357
Jan 29 17:06:49 crc kubenswrapper[4886]: I0129 17:06:49.428088 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 17:06:49 crc kubenswrapper[4886]: W0129 17:06:49.434781 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod426bc8f7_73fc_4b57_acd0_7fd8cc26b8a5.slice/crio-d5621121b70db635809d6807b77222d4ab1e04f02615d9fa23d98fc438df1164 WatchSource:0}: Error finding container d5621121b70db635809d6807b77222d4ab1e04f02615d9fa23d98fc438df1164: Status 404 returned error can't find the container with id d5621121b70db635809d6807b77222d4ab1e04f02615d9fa23d98fc438df1164
Jan 29 17:06:49 crc kubenswrapper[4886]: I0129 17:06:49.737437 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f2a90939-bcbf-44d8-8ebe-7ab1d118b360","Type":"ContainerStarted","Data":"a06b5cce5a1745b2439afd2d0c3ff6b9f761ea3f97b4ad1a67abe7ae84d84767"}
Jan 29 17:06:49 crc kubenswrapper[4886]: I0129 17:06:49.739953 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p924n" event={"ID":"68cdc6ed-ce63-43af-8502-b36cc0ae788a","Type":"ContainerStarted","Data":"6375ad3e949f813db64562de4e61fa2910abcb717d2e211c509e5dbcb6b07f3a"}
Jan 29 17:06:49 crc kubenswrapper[4886]: I0129 17:06:49.739979 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p924n" event={"ID":"68cdc6ed-ce63-43af-8502-b36cc0ae788a","Type":"ContainerStarted","Data":"76b68b08b92b70f0de4c1a2319c04176b3479b075a2ab3366608b1fce7ae76ee"}
Jan 29 17:06:49 crc kubenswrapper[4886]: I0129 17:06:49.741599 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5","Type":"ContainerStarted","Data":"d5621121b70db635809d6807b77222d4ab1e04f02615d9fa23d98fc438df1164"}
Jan 29 17:06:49 crc kubenswrapper[4886]: I0129 17:06:49.743221 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87986c31-37d7-4624-87a2-b5678e01d865","Type":"ContainerStarted","Data":"6528db29d7d5821f74fc120a90a127f94065eb87d3cb30310e3e2849cde918e4"}
Jan 29 17:06:49 crc kubenswrapper[4886]: I0129 17:06:49.745209 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8m2mm" event={"ID":"8923ac96-087a-425b-a8b4-c09aa4be3d78","Type":"ContainerStarted","Data":"b56f617415d312996740dc4a8697ef643e749e77f4339179492aab6c12f2f0d4"}
Jan 29 17:06:49 crc kubenswrapper[4886]: I0129 17:06:49.753386 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" event={"ID":"80d171a6-11ab-4cdf-b469-acb56ff11735","Type":"ContainerStarted","Data":"26aa10c89bd28f4d17b03fabdd3c3dd7d4b1ab633d533650ee03163b7c656cd5"}
Jan 29 17:06:49 crc kubenswrapper[4886]: I0129 17:06:49.753432 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" event={"ID":"80d171a6-11ab-4cdf-b469-acb56ff11735","Type":"ContainerStarted","Data":"81bf0e642c0dbb7fd724006f0c2c518606f7b43d2584453df92bcfe55b829357"}
Jan 29 17:06:49 crc kubenswrapper[4886]: I0129 17:06:49.763013 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-p924n" podStartSLOduration=36.762965829 podStartE2EDuration="36.762965829s" podCreationTimestamp="2026-01-29 17:06:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:06:49.76052606 +0000 UTC m=+2692.669245332" watchObservedRunningTime="2026-01-29 17:06:49.762965829 +0000 UTC m=+2692.671685111"
Jan 29 17:06:49 crc kubenswrapper[4886]: I0129 17:06:49.827727 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-8m2mm" podStartSLOduration=5.439114842 podStartE2EDuration="45.827705862s" podCreationTimestamp="2026-01-29 17:06:04 +0000 UTC" firstStartedPulling="2026-01-29 17:06:06.563731085 +0000 UTC m=+2649.472450347" lastFinishedPulling="2026-01-29 17:06:46.952322095 +0000 UTC m=+2689.861041367" observedRunningTime="2026-01-29 17:06:49.804968982 +0000 UTC m=+2692.713688254" watchObservedRunningTime="2026-01-29 17:06:49.827705862 +0000 UTC m=+2692.736425134"
Jan 29 17:06:50 crc kubenswrapper[4886]: I0129 17:06:50.492660 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-5smww" podUID="2c74b25a-0daf-4c7e-a023-a7082d8d73cf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.202:5353: i/o timeout"
Jan 29 17:06:50 crc kubenswrapper[4886]: I0129 17:06:50.823734 4886 generic.go:334] "Generic (PLEG): container finished" podID="80d171a6-11ab-4cdf-b469-acb56ff11735" containerID="26aa10c89bd28f4d17b03fabdd3c3dd7d4b1ab633d533650ee03163b7c656cd5" exitCode=0
Jan 29 17:06:50 crc kubenswrapper[4886]: I0129 17:06:50.824108 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" event={"ID":"80d171a6-11ab-4cdf-b469-acb56ff11735","Type":"ContainerDied","Data":"26aa10c89bd28f4d17b03fabdd3c3dd7d4b1ab633d533650ee03163b7c656cd5"}
Jan 29 17:06:50 crc kubenswrapper[4886]: I0129 17:06:50.824859 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8"
Jan 29 17:06:50 crc kubenswrapper[4886]: I0129 17:06:50.824929 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" event={"ID":"80d171a6-11ab-4cdf-b469-acb56ff11735","Type":"ContainerStarted","Data":"705da8d91cb45e05b6aa5ab5b116ce8252bf3f498078113a7eee5edc1d206bca"}
Jan 29 17:06:50 crc kubenswrapper[4886]: I0129 17:06:50.834739 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f2a90939-bcbf-44d8-8ebe-7ab1d118b360","Type":"ContainerStarted","Data":"33ad2a1126eff6cbb88ccc77df323fa1e654c5d2155c0985168da0fd53e1864a"}
Jan 29 17:06:50 crc kubenswrapper[4886]: I0129 17:06:50.834898 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f2a90939-bcbf-44d8-8ebe-7ab1d118b360" containerName="glance-log" containerID="cri-o://33ad2a1126eff6cbb88ccc77df323fa1e654c5d2155c0985168da0fd53e1864a" gracePeriod=30
Jan 29 17:06:50 crc kubenswrapper[4886]: I0129 17:06:50.835128 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f2a90939-bcbf-44d8-8ebe-7ab1d118b360" containerName="glance-httpd" containerID="cri-o://95a7d3b8a9e32ae8ae2e3ef610040f7131916bc7de34db8cc1af0fec9c3ef960" gracePeriod=30
Jan 29 17:06:50 crc kubenswrapper[4886]: I0129 17:06:50.841061 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5","Type":"ContainerStarted","Data":"fb8fc548f591be6e16630c1c9171e7ca1c4549f03107635ab3d54cf848daec39"}
Jan 29 17:06:50 crc kubenswrapper[4886]: I0129 17:06:50.861904 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" podStartSLOduration=36.861888171 podStartE2EDuration="36.861888171s" podCreationTimestamp="2026-01-29 17:06:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:06:50.857981721 +0000 UTC m=+2693.766701013" watchObservedRunningTime="2026-01-29 17:06:50.861888171 +0000 UTC m=+2693.770607443"
Jan 29 17:06:50 crc kubenswrapper[4886]: I0129 17:06:50.892624 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=36.892594516 podStartE2EDuration="36.892594516s" podCreationTimestamp="2026-01-29 17:06:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:06:50.879644341 +0000 UTC m=+2693.788363643" watchObservedRunningTime="2026-01-29 17:06:50.892594516 +0000 UTC m=+2693.801313788"
Jan 29 17:06:51 crc kubenswrapper[4886]: I0129 17:06:51.867289 4886 generic.go:334] "Generic (PLEG): container finished" podID="f2a90939-bcbf-44d8-8ebe-7ab1d118b360" containerID="95a7d3b8a9e32ae8ae2e3ef610040f7131916bc7de34db8cc1af0fec9c3ef960" exitCode=143
Jan 29 17:06:51 crc kubenswrapper[4886]: I0129 17:06:51.867873 4886 generic.go:334] "Generic (PLEG): container finished" podID="f2a90939-bcbf-44d8-8ebe-7ab1d118b360" containerID="33ad2a1126eff6cbb88ccc77df323fa1e654c5d2155c0985168da0fd53e1864a" exitCode=143
Jan 29 17:06:51 crc kubenswrapper[4886]: I0129 17:06:51.867364 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f2a90939-bcbf-44d8-8ebe-7ab1d118b360","Type":"ContainerDied","Data":"95a7d3b8a9e32ae8ae2e3ef610040f7131916bc7de34db8cc1af0fec9c3ef960"}
Jan 29 17:06:51 crc kubenswrapper[4886]: I0129 17:06:51.867911 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f2a90939-bcbf-44d8-8ebe-7ab1d118b360","Type":"ContainerDied","Data":"33ad2a1126eff6cbb88ccc77df323fa1e654c5d2155c0985168da0fd53e1864a"}
Jan 29 17:06:51 crc kubenswrapper[4886]: I0129 17:06:51.867923 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f2a90939-bcbf-44d8-8ebe-7ab1d118b360","Type":"ContainerDied","Data":"a06b5cce5a1745b2439afd2d0c3ff6b9f761ea3f97b4ad1a67abe7ae84d84767"}
Jan 29 17:06:51 crc kubenswrapper[4886]: I0129 17:06:51.867933 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a06b5cce5a1745b2439afd2d0c3ff6b9f761ea3f97b4ad1a67abe7ae84d84767"
Jan 29 17:06:51 crc kubenswrapper[4886]: I0129 17:06:51.916914 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 17:06:51 crc kubenswrapper[4886]: I0129 17:06:51.947575 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-combined-ca-bundle\") pod \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\" (UID: \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\") "
Jan 29 17:06:51 crc kubenswrapper[4886]: I0129 17:06:51.947742 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-config-data\") pod \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\" (UID: \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\") "
Jan 29 17:06:51 crc kubenswrapper[4886]: I0129 17:06:51.947862 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\") pod \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\" (UID: \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\") "
Jan 29 17:06:51 crc kubenswrapper[4886]: I0129 17:06:51.947917 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-648q7\" (UniqueName: \"kubernetes.io/projected/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-kube-api-access-648q7\") pod \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\" (UID: \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\") "
Jan 29 17:06:51 crc kubenswrapper[4886]: I0129 17:06:51.947952 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-logs\") pod \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\" (UID: \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\") "
Jan 29 17:06:51 crc kubenswrapper[4886]: I0129 17:06:51.948000 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-scripts\") pod \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\" (UID: \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\") "
Jan 29 17:06:51 crc kubenswrapper[4886]: I0129 17:06:51.948171 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-httpd-run\") pod \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\" (UID: \"f2a90939-bcbf-44d8-8ebe-7ab1d118b360\") "
Jan 29 17:06:51 crc kubenswrapper[4886]: I0129 17:06:51.948833 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f2a90939-bcbf-44d8-8ebe-7ab1d118b360" (UID: "f2a90939-bcbf-44d8-8ebe-7ab1d118b360"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:06:51 crc kubenswrapper[4886]: I0129 17:06:51.949277 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-logs" (OuterVolumeSpecName: "logs") pod "f2a90939-bcbf-44d8-8ebe-7ab1d118b360" (UID: "f2a90939-bcbf-44d8-8ebe-7ab1d118b360"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:06:51 crc kubenswrapper[4886]: I0129 17:06:51.949883 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-logs\") on node \"crc\" DevicePath \"\""
Jan 29 17:06:51 crc kubenswrapper[4886]: I0129 17:06:51.949904 4886 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 29 17:06:51 crc kubenswrapper[4886]: I0129 17:06:51.954922 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-kube-api-access-648q7" (OuterVolumeSpecName: "kube-api-access-648q7") pod "f2a90939-bcbf-44d8-8ebe-7ab1d118b360" (UID: "f2a90939-bcbf-44d8-8ebe-7ab1d118b360"). InnerVolumeSpecName "kube-api-access-648q7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:06:51 crc kubenswrapper[4886]: I0129 17:06:51.955559 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-scripts" (OuterVolumeSpecName: "scripts") pod "f2a90939-bcbf-44d8-8ebe-7ab1d118b360" (UID: "f2a90939-bcbf-44d8-8ebe-7ab1d118b360"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:06:51 crc kubenswrapper[4886]: I0129 17:06:51.976368 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1" (OuterVolumeSpecName: "glance") pod "f2a90939-bcbf-44d8-8ebe-7ab1d118b360" (UID: "f2a90939-bcbf-44d8-8ebe-7ab1d118b360"). InnerVolumeSpecName "pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 29 17:06:52 crc kubenswrapper[4886]: I0129 17:06:52.010173 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2a90939-bcbf-44d8-8ebe-7ab1d118b360" (UID: "f2a90939-bcbf-44d8-8ebe-7ab1d118b360"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:06:52 crc kubenswrapper[4886]: I0129 17:06:52.037832 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-config-data" (OuterVolumeSpecName: "config-data") pod "f2a90939-bcbf-44d8-8ebe-7ab1d118b360" (UID: "f2a90939-bcbf-44d8-8ebe-7ab1d118b360"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:06:52 crc kubenswrapper[4886]: I0129 17:06:52.051346 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:52 crc kubenswrapper[4886]: I0129 17:06:52.051376 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:52 crc kubenswrapper[4886]: I0129 17:06:52.051411 4886 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\") on node \"crc\" " Jan 29 17:06:52 crc kubenswrapper[4886]: I0129 17:06:52.051422 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-648q7\" (UniqueName: \"kubernetes.io/projected/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-kube-api-access-648q7\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:52 crc kubenswrapper[4886]: I0129 17:06:52.051434 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2a90939-bcbf-44d8-8ebe-7ab1d118b360-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:52 crc kubenswrapper[4886]: I0129 17:06:52.083180 4886 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 29 17:06:52 crc kubenswrapper[4886]: I0129 17:06:52.083374 4886 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1") on node "crc" Jan 29 17:06:52 crc kubenswrapper[4886]: I0129 17:06:52.153191 4886 reconciler_common.go:293] "Volume detached for volume \"pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:52 crc kubenswrapper[4886]: I0129 17:06:52.884746 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5","Type":"ContainerStarted","Data":"794f8e0bf261a512c459ecf62c8c7c26bca5d60128a7b4f23734cabe8f7c898d"} Jan 29 17:06:52 crc kubenswrapper[4886]: I0129 17:06:52.884938 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5" containerName="glance-log" containerID="cri-o://fb8fc548f591be6e16630c1c9171e7ca1c4549f03107635ab3d54cf848daec39" gracePeriod=30 Jan 29 17:06:52 crc kubenswrapper[4886]: I0129 17:06:52.885026 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5" containerName="glance-httpd" containerID="cri-o://794f8e0bf261a512c459ecf62c8c7c26bca5d60128a7b4f23734cabe8f7c898d" gracePeriod=30 Jan 29 17:06:52 crc kubenswrapper[4886]: I0129 17:06:52.891151 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"87986c31-37d7-4624-87a2-b5678e01d865","Type":"ContainerStarted","Data":"fc4b86cf717b23c7c04aaa4106c7da0d6d9a36f8580e8da13099630ec38cb927"} Jan 29 17:06:52 crc kubenswrapper[4886]: I0129 17:06:52.891194 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 17:06:52 crc kubenswrapper[4886]: I0129 17:06:52.923987 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=38.9239639 podStartE2EDuration="38.9239639s" podCreationTimestamp="2026-01-29 17:06:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:06:52.91684983 +0000 UTC m=+2695.825569112" watchObservedRunningTime="2026-01-29 17:06:52.9239639 +0000 UTC m=+2695.832683172" Jan 29 17:06:52 crc kubenswrapper[4886]: I0129 17:06:52.950305 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:52.982114 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:52.991396 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 17:06:53 crc kubenswrapper[4886]: E0129 17:06:52.991992 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c74b25a-0daf-4c7e-a023-a7082d8d73cf" containerName="dnsmasq-dns" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:52.992007 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c74b25a-0daf-4c7e-a023-a7082d8d73cf" containerName="dnsmasq-dns" Jan 29 17:06:53 crc kubenswrapper[4886]: E0129 17:06:52.992022 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2a90939-bcbf-44d8-8ebe-7ab1d118b360" containerName="glance-log" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:52.992031 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2a90939-bcbf-44d8-8ebe-7ab1d118b360" containerName="glance-log" Jan 29 17:06:53 crc kubenswrapper[4886]: E0129 17:06:52.992043 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2a90939-bcbf-44d8-8ebe-7ab1d118b360" containerName="glance-httpd" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:52.992050 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2a90939-bcbf-44d8-8ebe-7ab1d118b360" containerName="glance-httpd" Jan 29 17:06:53 crc kubenswrapper[4886]: E0129 17:06:52.992065 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c74b25a-0daf-4c7e-a023-a7082d8d73cf" containerName="init" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:52.992072 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c74b25a-0daf-4c7e-a023-a7082d8d73cf" containerName="init" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:52.992284 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2a90939-bcbf-44d8-8ebe-7ab1d118b360" containerName="glance-log" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:52.992321 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2a90939-bcbf-44d8-8ebe-7ab1d118b360" containerName="glance-httpd" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:52.992350 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c74b25a-0daf-4c7e-a023-a7082d8d73cf" containerName="dnsmasq-dns" Jan 29 
17:06:53 crc kubenswrapper[4886]: I0129 17:06:52.993657 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:52.999693 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:53.024565 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:53.026180 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:53.191111 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/849de0d3-3456-44c2-bef4-3a435e4a432a-config-data\") pod \"glance-default-external-api-0\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:53.191166 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/849de0d3-3456-44c2-bef4-3a435e4a432a-logs\") pod \"glance-default-external-api-0\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:53.191201 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\") pod \"glance-default-external-api-0\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:53.191475 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fglvx\" (UniqueName: \"kubernetes.io/projected/849de0d3-3456-44c2-bef4-3a435e4a432a-kube-api-access-fglvx\") pod \"glance-default-external-api-0\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:53.191526 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/849de0d3-3456-44c2-bef4-3a435e4a432a-scripts\") pod \"glance-default-external-api-0\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:53.191827 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/849de0d3-3456-44c2-bef4-3a435e4a432a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:53.191942 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/849de0d3-3456-44c2-bef4-3a435e4a432a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " 
pod="openstack/glance-default-external-api-0" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:53.191983 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/849de0d3-3456-44c2-bef4-3a435e4a432a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:53.293779 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/849de0d3-3456-44c2-bef4-3a435e4a432a-config-data\") pod \"glance-default-external-api-0\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:53.293845 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/849de0d3-3456-44c2-bef4-3a435e4a432a-logs\") pod \"glance-default-external-api-0\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:53.293899 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\") pod \"glance-default-external-api-0\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:53.294119 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fglvx\" (UniqueName: \"kubernetes.io/projected/849de0d3-3456-44c2-bef4-3a435e4a432a-kube-api-access-fglvx\") pod \"glance-default-external-api-0\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:53.294203 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/849de0d3-3456-44c2-bef4-3a435e4a432a-scripts\") pod \"glance-default-external-api-0\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:53.294342 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/849de0d3-3456-44c2-bef4-3a435e4a432a-logs\") pod \"glance-default-external-api-0\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:53.294507 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/849de0d3-3456-44c2-bef4-3a435e4a432a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:53.294612 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/849de0d3-3456-44c2-bef4-3a435e4a432a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " 
pod="openstack/glance-default-external-api-0" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:53.294660 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/849de0d3-3456-44c2-bef4-3a435e4a432a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:53.294770 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/849de0d3-3456-44c2-bef4-3a435e4a432a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:53.296798 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:53.296853 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\") pod \"glance-default-external-api-0\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9fc1bf04f61733e1543e4c6d32069c38c610c3d0fa9a349fa6a409f3542d3c50/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:53.299452 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/849de0d3-3456-44c2-bef4-3a435e4a432a-scripts\") pod \"glance-default-external-api-0\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:53.301933 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/849de0d3-3456-44c2-bef4-3a435e4a432a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:53.305002 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/849de0d3-3456-44c2-bef4-3a435e4a432a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:53.305828 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/849de0d3-3456-44c2-bef4-3a435e4a432a-config-data\") pod \"glance-default-external-api-0\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:53.316866 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fglvx\" (UniqueName: \"kubernetes.io/projected/849de0d3-3456-44c2-bef4-3a435e4a432a-kube-api-access-fglvx\") pod \"glance-default-external-api-0\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:06:53 crc 
kubenswrapper[4886]: I0129 17:06:53.346477 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\") pod \"glance-default-external-api-0\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:06:53 crc kubenswrapper[4886]: I0129 17:06:53.438880 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:53.905126 4886 generic.go:334] "Generic (PLEG): container finished" podID="426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5" containerID="794f8e0bf261a512c459ecf62c8c7c26bca5d60128a7b4f23734cabe8f7c898d" exitCode=143 Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:53.905471 4886 generic.go:334] "Generic (PLEG): container finished" podID="426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5" containerID="fb8fc548f591be6e16630c1c9171e7ca1c4549f03107635ab3d54cf848daec39" exitCode=143 Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:53.905178 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5","Type":"ContainerDied","Data":"794f8e0bf261a512c459ecf62c8c7c26bca5d60128a7b4f23734cabe8f7c898d"} Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:53.905520 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5","Type":"ContainerDied","Data":"fb8fc548f591be6e16630c1c9171e7ca1c4549f03107635ab3d54cf848daec39"} Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:53.905535 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5","Type":"ContainerDied","Data":"d5621121b70db635809d6807b77222d4ab1e04f02615d9fa23d98fc438df1164"} Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:53.905548 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5621121b70db635809d6807b77222d4ab1e04f02615d9fa23d98fc438df1164" Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:53.941504 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.111358 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-logs\") pod \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\" (UID: \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\") " Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.111467 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-scripts\") pod \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\" (UID: \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\") " Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.111685 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\") pod \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\" (UID: \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\") " Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.111727 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs4f7\" (UniqueName: \"kubernetes.io/projected/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-kube-api-access-xs4f7\") pod \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\" (UID: \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\") " Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.111807 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-httpd-run\") pod \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\" (UID: \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\") " Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.111834 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-combined-ca-bundle\") pod \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\" (UID: \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\") " Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.111971 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-config-data\") pod \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\" (UID: \"426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5\") " Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.119728 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-logs" (OuterVolumeSpecName: "logs") pod "426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5" (UID: "426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.119929 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5" (UID: "426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.130360 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-scripts" (OuterVolumeSpecName: "scripts") pod "426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5" (UID: "426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.130558 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-kube-api-access-xs4f7" (OuterVolumeSpecName: "kube-api-access-xs4f7") pod "426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5" (UID: "426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5"). InnerVolumeSpecName "kube-api-access-xs4f7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.196441 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5" (UID: "426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.215820 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-logs\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.215845 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.215853 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xs4f7\" (UniqueName: \"kubernetes.io/projected/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-kube-api-access-xs4f7\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.215864 4886 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.215872 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.258598 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-config-data" (OuterVolumeSpecName: "config-data") pod "426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5" (UID: "426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.320802 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.376898 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019" (OuterVolumeSpecName: "glance") pod "426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5" (UID: "426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5"). InnerVolumeSpecName "pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.422462 4886 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\") on node \"crc\" " Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.448428 4886 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.448585 4886 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019") on node "crc" Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.524743 4886 reconciler_common.go:293] "Volume detached for volume \"pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.632387 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2a90939-bcbf-44d8-8ebe-7ab1d118b360" path="/var/lib/kubelet/pods/f2a90939-bcbf-44d8-8ebe-7ab1d118b360/volumes" Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.914113 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.960877 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.981004 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.992957 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 17:06:54 crc kubenswrapper[4886]: E0129 17:06:54.993539 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5" containerName="glance-httpd" Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.993554 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5" containerName="glance-httpd" Jan 29 17:06:54 crc kubenswrapper[4886]: E0129 17:06:54.993587 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5" containerName="glance-log" Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.993596 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5" containerName="glance-log" Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.993840 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5" containerName="glance-httpd" Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.993856 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5" containerName="glance-log" Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.994956 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.998208 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 17:06:54 crc kubenswrapper[4886]: I0129 17:06:54.998444 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 17:06:55 crc kubenswrapper[4886]: I0129 17:06:55.005535 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 17:06:55 crc kubenswrapper[4886]: I0129 17:06:55.145827 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:06:55 crc kubenswrapper[4886]: I0129 17:06:55.145866 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:06:55 crc kubenswrapper[4886]: I0129 17:06:55.145971 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-logs\") pod \"glance-default-internal-api-0\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:06:55 crc kubenswrapper[4886]: I0129 17:06:55.145995 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhpzr\" (UniqueName: \"kubernetes.io/projected/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-kube-api-access-fhpzr\") pod \"glance-default-internal-api-0\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:06:55 crc kubenswrapper[4886]: I0129 17:06:55.146016 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\") pod \"glance-default-internal-api-0\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:06:55 crc kubenswrapper[4886]: I0129 17:06:55.146043 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:06:55 crc kubenswrapper[4886]: I0129 17:06:55.146111 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:06:55 crc kubenswrapper[4886]: I0129 17:06:55.146305 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:06:55 crc kubenswrapper[4886]: I0129 17:06:55.203866 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 17:06:55 crc kubenswrapper[4886]: W0129 17:06:55.213513 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod849de0d3_3456_44c2_bef4_3a435e4a432a.slice/crio-6c945ea15f303c81064b58dfa01521088d6d511849d81e35019f4fd66c782c28 WatchSource:0}: Error finding container 6c945ea15f303c81064b58dfa01521088d6d511849d81e35019f4fd66c782c28: Status 404 returned error can't find the container with id 6c945ea15f303c81064b58dfa01521088d6d511849d81e35019f4fd66c782c28 Jan 29 17:06:55 crc kubenswrapper[4886]: I0129 17:06:55.248209 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-logs\") pod \"glance-default-internal-api-0\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:06:55 crc kubenswrapper[4886]: I0129 17:06:55.248261 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhpzr\" (UniqueName: \"kubernetes.io/projected/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-kube-api-access-fhpzr\") pod \"glance-default-internal-api-0\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:06:55 crc kubenswrapper[4886]: I0129 17:06:55.248334 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\") pod \"glance-default-internal-api-0\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:06:55 crc kubenswrapper[4886]: I0129 17:06:55.248362 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:06:55 crc kubenswrapper[4886]: I0129 17:06:55.248430 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:06:55 crc kubenswrapper[4886]: I0129 17:06:55.248470 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:06:55 crc kubenswrapper[4886]: I0129 17:06:55.248519 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:06:55 crc kubenswrapper[4886]: I0129 17:06:55.248537 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:06:55 crc kubenswrapper[4886]: I0129 17:06:55.249815 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-logs\") pod \"glance-default-internal-api-0\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:06:55 crc kubenswrapper[4886]: I0129 17:06:55.252937 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:06:55 crc kubenswrapper[4886]: I0129 17:06:55.265753 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:06:55 crc kubenswrapper[4886]: I0129 17:06:55.265854 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhpzr\" (UniqueName: \"kubernetes.io/projected/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-kube-api-access-fhpzr\") pod \"glance-default-internal-api-0\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:06:55 crc kubenswrapper[4886]: I0129 17:06:55.265885 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 29 17:06:55 crc kubenswrapper[4886]: I0129 17:06:55.266063 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\") pod \"glance-default-internal-api-0\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a7b71ee9dc20b2cd8e0489051d74fcf4864cc02a892819f8a5785e080087446e/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 29 17:06:55 crc kubenswrapper[4886]: I0129 17:06:55.266126 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:06:55 crc kubenswrapper[4886]: I0129 17:06:55.266275 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:06:55 crc kubenswrapper[4886]: I0129 17:06:55.269928 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:06:55 crc kubenswrapper[4886]: I0129 17:06:55.322710 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\") pod \"glance-default-internal-api-0\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:06:55 crc kubenswrapper[4886]: I0129 17:06:55.622395 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 17:06:55 crc kubenswrapper[4886]: I0129 17:06:55.935086 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"849de0d3-3456-44c2-bef4-3a435e4a432a","Type":"ContainerStarted","Data":"6c945ea15f303c81064b58dfa01521088d6d511849d81e35019f4fd66c782c28"} Jan 29 17:06:56 crc kubenswrapper[4886]: I0129 17:06:56.545700 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 17:06:56 crc kubenswrapper[4886]: I0129 17:06:56.646511 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5" path="/var/lib/kubelet/pods/426bc8f7-73fc-4b57-acd0-7fd8cc26b8a5/volumes" Jan 29 17:06:56 crc kubenswrapper[4886]: I0129 17:06:56.949640 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf","Type":"ContainerStarted","Data":"71bc8d6cf1178c38541a40863263406b012b61b297b4f5183d44e11e56405a8a"} Jan 29 17:06:58 crc kubenswrapper[4886]: I0129 17:06:58.992161 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"849de0d3-3456-44c2-bef4-3a435e4a432a","Type":"ContainerStarted","Data":"685691dd71892e3462a49d43e961e4398610edbd2ff6858db714971fb73711e6"} Jan 29 17:06:59 crc kubenswrapper[4886]: I0129 17:06:59.870485 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" Jan 29 17:06:59 crc kubenswrapper[4886]: I0129 17:06:59.962345 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-t8rs7"] Jan 29 17:06:59 crc kubenswrapper[4886]: I0129 17:06:59.962586 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-t8rs7" podUID="eb212bbc-3071-4fda-968d-b6d3f19996ee" containerName="dnsmasq-dns" containerID="cri-o://54bdeb43a338f0b719b206ca212f50bc02c6d2592ec0ac66c6b8743631a3cf1b" gracePeriod=10 Jan 29 17:07:01 crc kubenswrapper[4886]: I0129 17:07:01.017636 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf","Type":"ContainerStarted","Data":"d46a9e5456f252ab3dd8ef0ca224f83e7f91449851fd433a23e9070eb20e028e"} Jan 29 17:07:01 crc kubenswrapper[4886]: I0129 17:07:01.507279 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-t8rs7" podUID="eb212bbc-3071-4fda-968d-b6d3f19996ee" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: connect: connection refused" Jan 29 17:07:02 crc kubenswrapper[4886]: I0129 17:07:02.031762 4886 generic.go:334] "Generic (PLEG): container finished" podID="eb212bbc-3071-4fda-968d-b6d3f19996ee" containerID="54bdeb43a338f0b719b206ca212f50bc02c6d2592ec0ac66c6b8743631a3cf1b" exitCode=0 Jan 29 17:07:02 crc kubenswrapper[4886]: I0129 17:07:02.032010 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-t8rs7" event={"ID":"eb212bbc-3071-4fda-968d-b6d3f19996ee","Type":"ContainerDied","Data":"54bdeb43a338f0b719b206ca212f50bc02c6d2592ec0ac66c6b8743631a3cf1b"} Jan 29 17:07:03 crc kubenswrapper[4886]: I0129 17:07:03.050274 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"849de0d3-3456-44c2-bef4-3a435e4a432a","Type":"ContainerStarted","Data":"5e2f27254ecaeae6872715e18449eaa22b877597c8124da7a49920ec97100c5d"} Jan 29 17:07:04 crc kubenswrapper[4886]: I0129 17:07:04.061643 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf","Type":"ContainerStarted","Data":"819d3c493df902007da456da0899d275e457a2f0ed2e48aedaf84f652820cb61"} Jan 29 17:07:04 crc kubenswrapper[4886]: I0129 17:07:04.098113 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=12.098092326 podStartE2EDuration="12.098092326s" podCreationTimestamp="2026-01-29 17:06:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:07:04.088088954 +0000 UTC m=+2706.996808246" watchObservedRunningTime="2026-01-29 17:07:04.098092326 +0000 UTC m=+2707.006811598" Jan 29 17:07:05 crc kubenswrapper[4886]: I0129 17:07:05.104680 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=11.104649747 podStartE2EDuration="11.104649747s" podCreationTimestamp="2026-01-29 17:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:07:05.094750958 +0000 UTC m=+2708.003470260" watchObservedRunningTime="2026-01-29 17:07:05.104649747 +0000 UTC m=+2708.013369039" Jan 29 17:07:05 crc kubenswrapper[4886]: I0129 17:07:05.622800 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 17:07:05 crc kubenswrapper[4886]: I0129 17:07:05.622858 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 17:07:05 crc kubenswrapper[4886]: I0129 17:07:05.926607 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 17:07:05 crc kubenswrapper[4886]: I0129 17:07:05.926770 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 17:07:06 crc kubenswrapper[4886]: I0129 17:07:06.084254 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 17:07:06 crc kubenswrapper[4886]: I0129 17:07:06.084569 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 17:07:07 crc kubenswrapper[4886]: I0129 17:07:07.375171 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-t8rs7" Jan 29 17:07:07 crc kubenswrapper[4886]: I0129 17:07:07.488612 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb212bbc-3071-4fda-968d-b6d3f19996ee-ovsdbserver-nb\") pod \"eb212bbc-3071-4fda-968d-b6d3f19996ee\" (UID: \"eb212bbc-3071-4fda-968d-b6d3f19996ee\") " Jan 29 17:07:07 crc kubenswrapper[4886]: I0129 17:07:07.488920 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czcfr\" (UniqueName: \"kubernetes.io/projected/eb212bbc-3071-4fda-968d-b6d3f19996ee-kube-api-access-czcfr\") pod \"eb212bbc-3071-4fda-968d-b6d3f19996ee\" (UID: \"eb212bbc-3071-4fda-968d-b6d3f19996ee\") " Jan 29 17:07:07 crc kubenswrapper[4886]: I0129 17:07:07.489121 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb212bbc-3071-4fda-968d-b6d3f19996ee-ovsdbserver-sb\") pod \"eb212bbc-3071-4fda-968d-b6d3f19996ee\" (UID: \"eb212bbc-3071-4fda-968d-b6d3f19996ee\") " Jan 29 17:07:07 crc kubenswrapper[4886]: I0129 17:07:07.489226 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb212bbc-3071-4fda-968d-b6d3f19996ee-config\") pod \"eb212bbc-3071-4fda-968d-b6d3f19996ee\" (UID: \"eb212bbc-3071-4fda-968d-b6d3f19996ee\") " Jan 29 17:07:07 crc kubenswrapper[4886]: I0129 17:07:07.489292 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb212bbc-3071-4fda-968d-b6d3f19996ee-dns-svc\") pod \"eb212bbc-3071-4fda-968d-b6d3f19996ee\" (UID: \"eb212bbc-3071-4fda-968d-b6d3f19996ee\") " Jan 29 17:07:07 crc kubenswrapper[4886]: I0129 17:07:07.500567 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb212bbc-3071-4fda-968d-b6d3f19996ee-kube-api-access-czcfr" (OuterVolumeSpecName: "kube-api-access-czcfr") pod "eb212bbc-3071-4fda-968d-b6d3f19996ee" (UID: "eb212bbc-3071-4fda-968d-b6d3f19996ee"). InnerVolumeSpecName "kube-api-access-czcfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:07:07 crc kubenswrapper[4886]: I0129 17:07:07.541816 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb212bbc-3071-4fda-968d-b6d3f19996ee-config" (OuterVolumeSpecName: "config") pod "eb212bbc-3071-4fda-968d-b6d3f19996ee" (UID: "eb212bbc-3071-4fda-968d-b6d3f19996ee"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:07:07 crc kubenswrapper[4886]: I0129 17:07:07.545151 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb212bbc-3071-4fda-968d-b6d3f19996ee-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eb212bbc-3071-4fda-968d-b6d3f19996ee" (UID: "eb212bbc-3071-4fda-968d-b6d3f19996ee"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:07:07 crc kubenswrapper[4886]: I0129 17:07:07.557868 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb212bbc-3071-4fda-968d-b6d3f19996ee-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eb212bbc-3071-4fda-968d-b6d3f19996ee" (UID: "eb212bbc-3071-4fda-968d-b6d3f19996ee"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:07:07 crc kubenswrapper[4886]: I0129 17:07:07.557956 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb212bbc-3071-4fda-968d-b6d3f19996ee-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eb212bbc-3071-4fda-968d-b6d3f19996ee" (UID: "eb212bbc-3071-4fda-968d-b6d3f19996ee"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:07:07 crc kubenswrapper[4886]: I0129 17:07:07.592930 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czcfr\" (UniqueName: \"kubernetes.io/projected/eb212bbc-3071-4fda-968d-b6d3f19996ee-kube-api-access-czcfr\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:07 crc kubenswrapper[4886]: I0129 17:07:07.592968 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb212bbc-3071-4fda-968d-b6d3f19996ee-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:07 crc kubenswrapper[4886]: I0129 17:07:07.592981 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb212bbc-3071-4fda-968d-b6d3f19996ee-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:07 crc kubenswrapper[4886]: I0129 17:07:07.592992 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb212bbc-3071-4fda-968d-b6d3f19996ee-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:07 crc kubenswrapper[4886]: I0129 17:07:07.593005 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb212bbc-3071-4fda-968d-b6d3f19996ee-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:08 crc kubenswrapper[4886]: I0129 17:07:08.110427 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-t8rs7" event={"ID":"eb212bbc-3071-4fda-968d-b6d3f19996ee","Type":"ContainerDied","Data":"da2d61dccf59424cc14b54a614d36ae066f9a9d76b8f120a8702b08ed1b7f949"} Jan 29 17:07:08 crc kubenswrapper[4886]: I0129 17:07:08.110793 4886 scope.go:117] "RemoveContainer" containerID="54bdeb43a338f0b719b206ca212f50bc02c6d2592ec0ac66c6b8743631a3cf1b" Jan 29 17:07:08 crc kubenswrapper[4886]: I0129 17:07:08.110499 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-t8rs7" Jan 29 17:07:08 crc kubenswrapper[4886]: I0129 17:07:08.153364 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-t8rs7"] Jan 29 17:07:08 crc kubenswrapper[4886]: I0129 17:07:08.166096 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-t8rs7"] Jan 29 17:07:08 crc kubenswrapper[4886]: I0129 17:07:08.627908 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb212bbc-3071-4fda-968d-b6d3f19996ee" path="/var/lib/kubelet/pods/eb212bbc-3071-4fda-968d-b6d3f19996ee/volumes" Jan 29 17:07:08 crc kubenswrapper[4886]: I0129 17:07:08.661225 4886 scope.go:117] "RemoveContainer" containerID="71b921e8db9e8e747c69aeafc44470b62e0400a32e8c7e760d1d991c175cbc64" Jan 29 17:07:11 crc kubenswrapper[4886]: I0129 17:07:11.507857 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-t8rs7" podUID="eb212bbc-3071-4fda-968d-b6d3f19996ee" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: i/o timeout" Jan 29 17:07:12 crc kubenswrapper[4886]: I0129 17:07:12.166143 4886 generic.go:334] "Generic (PLEG): container finished" podID="68cdc6ed-ce63-43af-8502-b36cc0ae788a" containerID="6375ad3e949f813db64562de4e61fa2910abcb717d2e211c509e5dbcb6b07f3a" exitCode=0 Jan 29 17:07:12 crc kubenswrapper[4886]: I0129 17:07:12.166194 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p924n" event={"ID":"68cdc6ed-ce63-43af-8502-b36cc0ae788a","Type":"ContainerDied","Data":"6375ad3e949f813db64562de4e61fa2910abcb717d2e211c509e5dbcb6b07f3a"} Jan 29 17:07:13 crc kubenswrapper[4886]: I0129 17:07:13.439786 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 17:07:13 crc kubenswrapper[4886]: I0129 17:07:13.440133 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 17:07:13 crc kubenswrapper[4886]: I0129 17:07:13.593258 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 17:07:13 crc kubenswrapper[4886]: I0129 17:07:13.596464 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 17:07:14 crc kubenswrapper[4886]: I0129 17:07:14.188743 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 17:07:14 crc kubenswrapper[4886]: I0129 17:07:14.188788 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 17:07:14 crc kubenswrapper[4886]: I0129 17:07:14.890201 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-p924n" Jan 29 17:07:14 crc kubenswrapper[4886]: I0129 17:07:14.965111 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68cdc6ed-ce63-43af-8502-b36cc0ae788a-config-data\") pod \"68cdc6ed-ce63-43af-8502-b36cc0ae788a\" (UID: \"68cdc6ed-ce63-43af-8502-b36cc0ae788a\") " Jan 29 17:07:14 crc kubenswrapper[4886]: I0129 17:07:14.965994 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq47h\" (UniqueName: \"kubernetes.io/projected/68cdc6ed-ce63-43af-8502-b36cc0ae788a-kube-api-access-cq47h\") pod \"68cdc6ed-ce63-43af-8502-b36cc0ae788a\" (UID: \"68cdc6ed-ce63-43af-8502-b36cc0ae788a\") " Jan 29 17:07:14 crc kubenswrapper[4886]: I0129 17:07:14.966064 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68cdc6ed-ce63-43af-8502-b36cc0ae788a-combined-ca-bundle\") pod \"68cdc6ed-ce63-43af-8502-b36cc0ae788a\" (UID: \"68cdc6ed-ce63-43af-8502-b36cc0ae788a\") " Jan 29 17:07:14 crc kubenswrapper[4886]: I0129 17:07:14.966153 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/68cdc6ed-ce63-43af-8502-b36cc0ae788a-credential-keys\") pod \"68cdc6ed-ce63-43af-8502-b36cc0ae788a\" (UID: \"68cdc6ed-ce63-43af-8502-b36cc0ae788a\") " Jan 29 17:07:14 crc kubenswrapper[4886]: I0129 17:07:14.966229 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/68cdc6ed-ce63-43af-8502-b36cc0ae788a-fernet-keys\") pod \"68cdc6ed-ce63-43af-8502-b36cc0ae788a\" (UID: \"68cdc6ed-ce63-43af-8502-b36cc0ae788a\") " Jan 29 17:07:14 crc kubenswrapper[4886]: I0129 17:07:14.966287 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68cdc6ed-ce63-43af-8502-b36cc0ae788a-scripts\") pod \"68cdc6ed-ce63-43af-8502-b36cc0ae788a\" (UID: \"68cdc6ed-ce63-43af-8502-b36cc0ae788a\") " Jan 29 17:07:14 crc kubenswrapper[4886]: I0129 17:07:14.970575 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68cdc6ed-ce63-43af-8502-b36cc0ae788a-scripts" (OuterVolumeSpecName: "scripts") pod "68cdc6ed-ce63-43af-8502-b36cc0ae788a" (UID: "68cdc6ed-ce63-43af-8502-b36cc0ae788a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:14 crc kubenswrapper[4886]: I0129 17:07:14.971579 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68cdc6ed-ce63-43af-8502-b36cc0ae788a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "68cdc6ed-ce63-43af-8502-b36cc0ae788a" (UID: "68cdc6ed-ce63-43af-8502-b36cc0ae788a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:14 crc kubenswrapper[4886]: I0129 17:07:14.971609 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68cdc6ed-ce63-43af-8502-b36cc0ae788a-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "68cdc6ed-ce63-43af-8502-b36cc0ae788a" (UID: "68cdc6ed-ce63-43af-8502-b36cc0ae788a"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:14 crc kubenswrapper[4886]: I0129 17:07:14.972879 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68cdc6ed-ce63-43af-8502-b36cc0ae788a-kube-api-access-cq47h" (OuterVolumeSpecName: "kube-api-access-cq47h") pod "68cdc6ed-ce63-43af-8502-b36cc0ae788a" (UID: "68cdc6ed-ce63-43af-8502-b36cc0ae788a"). InnerVolumeSpecName "kube-api-access-cq47h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:07:15 crc kubenswrapper[4886]: I0129 17:07:15.047671 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68cdc6ed-ce63-43af-8502-b36cc0ae788a-config-data" (OuterVolumeSpecName: "config-data") pod "68cdc6ed-ce63-43af-8502-b36cc0ae788a" (UID: "68cdc6ed-ce63-43af-8502-b36cc0ae788a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:15 crc kubenswrapper[4886]: I0129 17:07:15.047762 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68cdc6ed-ce63-43af-8502-b36cc0ae788a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68cdc6ed-ce63-43af-8502-b36cc0ae788a" (UID: "68cdc6ed-ce63-43af-8502-b36cc0ae788a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:15 crc kubenswrapper[4886]: I0129 17:07:15.069022 4886 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/68cdc6ed-ce63-43af-8502-b36cc0ae788a-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:15 crc kubenswrapper[4886]: I0129 17:07:15.069051 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68cdc6ed-ce63-43af-8502-b36cc0ae788a-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:15 crc kubenswrapper[4886]: I0129 17:07:15.069059 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68cdc6ed-ce63-43af-8502-b36cc0ae788a-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:15 crc kubenswrapper[4886]: I0129 17:07:15.069068 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq47h\" (UniqueName: \"kubernetes.io/projected/68cdc6ed-ce63-43af-8502-b36cc0ae788a-kube-api-access-cq47h\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:15 crc kubenswrapper[4886]: I0129 17:07:15.069079 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68cdc6ed-ce63-43af-8502-b36cc0ae788a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:15 crc kubenswrapper[4886]: I0129 17:07:15.069088 4886 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/68cdc6ed-ce63-43af-8502-b36cc0ae788a-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:15 crc kubenswrapper[4886]: I0129 17:07:15.227550 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-q2dxw" event={"ID":"ffb099fb-7bdb-4969-b3cb-6fc4ef498afd","Type":"ContainerStarted","Data":"462d0b69d42ff5bdae3194985f827b482bb0c2607dbc772e35d27e51d1171c94"} Jan 29 17:07:15 crc kubenswrapper[4886]: I0129 17:07:15.234668 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"87986c31-37d7-4624-87a2-b5678e01d865","Type":"ContainerStarted","Data":"2af8246b154ee39fedcfdd8e1579a14d1154c4bc23cb6682bb1d0354640c6bcf"} Jan 29 17:07:15 crc kubenswrapper[4886]: I0129 17:07:15.236260 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-6nmwn" event={"ID":"a0058f32-ae80-4dde-9dce-095c62f45979","Type":"ContainerStarted","Data":"ab83d2d0c36aaea48832e86668e20e1d6f6f876644014c27f52bee83b6960b7d"} Jan 29 17:07:15 crc kubenswrapper[4886]: I0129 17:07:15.238617 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p924n" event={"ID":"68cdc6ed-ce63-43af-8502-b36cc0ae788a","Type":"ContainerDied","Data":"76b68b08b92b70f0de4c1a2319c04176b3479b075a2ab3366608b1fce7ae76ee"} Jan 29 17:07:15 crc kubenswrapper[4886]: I0129 17:07:15.238645 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76b68b08b92b70f0de4c1a2319c04176b3479b075a2ab3366608b1fce7ae76ee" Jan 29 17:07:15 crc kubenswrapper[4886]: I0129 17:07:15.238711 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p924n" Jan 29 17:07:15 crc kubenswrapper[4886]: I0129 17:07:15.249169 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-q2dxw" podStartSLOduration=3.735655083 podStartE2EDuration="1m11.249151684s" podCreationTimestamp="2026-01-29 17:06:04 +0000 UTC" firstStartedPulling="2026-01-29 17:06:07.074218153 +0000 UTC m=+2649.982937415" lastFinishedPulling="2026-01-29 17:07:14.587714744 +0000 UTC m=+2717.496434016" observedRunningTime="2026-01-29 17:07:15.24581 +0000 UTC m=+2718.154529292" watchObservedRunningTime="2026-01-29 17:07:15.249151684 +0000 UTC m=+2718.157870956" Jan 29 17:07:15 crc kubenswrapper[4886]: I0129 17:07:15.275509 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-6nmwn" podStartSLOduration=2.780281854 podStartE2EDuration="1m11.275492216s" podCreationTimestamp="2026-01-29 17:06:04 +0000 UTC" firstStartedPulling="2026-01-29 17:06:06.093981444 +0000 UTC m=+2649.002700726" lastFinishedPulling="2026-01-29 17:07:14.589191816 +0000 UTC m=+2717.497911088" observedRunningTime="2026-01-29 17:07:15.261859182 +0000 UTC m=+2718.170578474" watchObservedRunningTime="2026-01-29 17:07:15.275492216 +0000 UTC m=+2718.184211488" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.015604 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5499bdc9-q6hr4"] Jan 29 17:07:16 crc kubenswrapper[4886]: E0129 17:07:16.016537 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb212bbc-3071-4fda-968d-b6d3f19996ee" containerName="init" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.016550 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb212bbc-3071-4fda-968d-b6d3f19996ee" containerName="init" Jan 29 17:07:16 crc kubenswrapper[4886]: E0129 17:07:16.016621 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68cdc6ed-ce63-43af-8502-b36cc0ae788a" containerName="keystone-bootstrap" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.016628 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="68cdc6ed-ce63-43af-8502-b36cc0ae788a" containerName="keystone-bootstrap" Jan 29 17:07:16 crc kubenswrapper[4886]: E0129 17:07:16.016642 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb212bbc-3071-4fda-968d-b6d3f19996ee" containerName="dnsmasq-dns" Jan 29 
17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.016648 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb212bbc-3071-4fda-968d-b6d3f19996ee" containerName="dnsmasq-dns" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.016832 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb212bbc-3071-4fda-968d-b6d3f19996ee" containerName="dnsmasq-dns" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.016847 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="68cdc6ed-ce63-43af-8502-b36cc0ae788a" containerName="keystone-bootstrap" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.018174 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5499bdc9-q6hr4" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.020930 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.021195 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-k5qcd" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.021196 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.021513 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.024718 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.027299 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.044205 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5499bdc9-q6hr4"] Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.091725 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d9e327b0-6e20-4b1d-a18f-64b8b49ef36d-credential-keys\") pod \"keystone-5499bdc9-q6hr4\" (UID: \"d9e327b0-6e20-4b1d-a18f-64b8b49ef36d\") " pod="openstack/keystone-5499bdc9-q6hr4" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.091853 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9e327b0-6e20-4b1d-a18f-64b8b49ef36d-scripts\") pod \"keystone-5499bdc9-q6hr4\" (UID: \"d9e327b0-6e20-4b1d-a18f-64b8b49ef36d\") " pod="openstack/keystone-5499bdc9-q6hr4" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.091910 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf5dx\" (UniqueName: \"kubernetes.io/projected/d9e327b0-6e20-4b1d-a18f-64b8b49ef36d-kube-api-access-vf5dx\") pod \"keystone-5499bdc9-q6hr4\" (UID: \"d9e327b0-6e20-4b1d-a18f-64b8b49ef36d\") " pod="openstack/keystone-5499bdc9-q6hr4" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.091963 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e327b0-6e20-4b1d-a18f-64b8b49ef36d-combined-ca-bundle\") pod \"keystone-5499bdc9-q6hr4\" (UID: \"d9e327b0-6e20-4b1d-a18f-64b8b49ef36d\") " pod="openstack/keystone-5499bdc9-q6hr4" Jan 29 17:07:16 crc kubenswrapper[4886]: 
I0129 17:07:16.092016 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9e327b0-6e20-4b1d-a18f-64b8b49ef36d-public-tls-certs\") pod \"keystone-5499bdc9-q6hr4\" (UID: \"d9e327b0-6e20-4b1d-a18f-64b8b49ef36d\") " pod="openstack/keystone-5499bdc9-q6hr4" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.092030 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9e327b0-6e20-4b1d-a18f-64b8b49ef36d-config-data\") pod \"keystone-5499bdc9-q6hr4\" (UID: \"d9e327b0-6e20-4b1d-a18f-64b8b49ef36d\") " pod="openstack/keystone-5499bdc9-q6hr4" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.092066 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9e327b0-6e20-4b1d-a18f-64b8b49ef36d-internal-tls-certs\") pod \"keystone-5499bdc9-q6hr4\" (UID: \"d9e327b0-6e20-4b1d-a18f-64b8b49ef36d\") " pod="openstack/keystone-5499bdc9-q6hr4" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.092134 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d9e327b0-6e20-4b1d-a18f-64b8b49ef36d-fernet-keys\") pod \"keystone-5499bdc9-q6hr4\" (UID: \"d9e327b0-6e20-4b1d-a18f-64b8b49ef36d\") " pod="openstack/keystone-5499bdc9-q6hr4" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.194083 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9e327b0-6e20-4b1d-a18f-64b8b49ef36d-scripts\") pod \"keystone-5499bdc9-q6hr4\" (UID: \"d9e327b0-6e20-4b1d-a18f-64b8b49ef36d\") " pod="openstack/keystone-5499bdc9-q6hr4" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.194176 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf5dx\" (UniqueName: \"kubernetes.io/projected/d9e327b0-6e20-4b1d-a18f-64b8b49ef36d-kube-api-access-vf5dx\") pod \"keystone-5499bdc9-q6hr4\" (UID: \"d9e327b0-6e20-4b1d-a18f-64b8b49ef36d\") " pod="openstack/keystone-5499bdc9-q6hr4" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.194252 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e327b0-6e20-4b1d-a18f-64b8b49ef36d-combined-ca-bundle\") pod \"keystone-5499bdc9-q6hr4\" (UID: \"d9e327b0-6e20-4b1d-a18f-64b8b49ef36d\") " pod="openstack/keystone-5499bdc9-q6hr4" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.194346 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9e327b0-6e20-4b1d-a18f-64b8b49ef36d-public-tls-certs\") pod \"keystone-5499bdc9-q6hr4\" (UID: \"d9e327b0-6e20-4b1d-a18f-64b8b49ef36d\") " pod="openstack/keystone-5499bdc9-q6hr4" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.194376 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9e327b0-6e20-4b1d-a18f-64b8b49ef36d-config-data\") pod \"keystone-5499bdc9-q6hr4\" (UID: \"d9e327b0-6e20-4b1d-a18f-64b8b49ef36d\") " pod="openstack/keystone-5499bdc9-q6hr4" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.194424 4886 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9e327b0-6e20-4b1d-a18f-64b8b49ef36d-internal-tls-certs\") pod \"keystone-5499bdc9-q6hr4\" (UID: \"d9e327b0-6e20-4b1d-a18f-64b8b49ef36d\") " pod="openstack/keystone-5499bdc9-q6hr4" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.194488 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d9e327b0-6e20-4b1d-a18f-64b8b49ef36d-fernet-keys\") pod \"keystone-5499bdc9-q6hr4\" (UID: \"d9e327b0-6e20-4b1d-a18f-64b8b49ef36d\") " pod="openstack/keystone-5499bdc9-q6hr4" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.194562 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d9e327b0-6e20-4b1d-a18f-64b8b49ef36d-credential-keys\") pod \"keystone-5499bdc9-q6hr4\" (UID: \"d9e327b0-6e20-4b1d-a18f-64b8b49ef36d\") " pod="openstack/keystone-5499bdc9-q6hr4" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.200154 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d9e327b0-6e20-4b1d-a18f-64b8b49ef36d-credential-keys\") pod \"keystone-5499bdc9-q6hr4\" (UID: \"d9e327b0-6e20-4b1d-a18f-64b8b49ef36d\") " pod="openstack/keystone-5499bdc9-q6hr4" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.200274 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d9e327b0-6e20-4b1d-a18f-64b8b49ef36d-fernet-keys\") pod \"keystone-5499bdc9-q6hr4\" (UID: \"d9e327b0-6e20-4b1d-a18f-64b8b49ef36d\") " pod="openstack/keystone-5499bdc9-q6hr4" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.201456 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9e327b0-6e20-4b1d-a18f-64b8b49ef36d-public-tls-certs\") pod \"keystone-5499bdc9-q6hr4\" (UID: \"d9e327b0-6e20-4b1d-a18f-64b8b49ef36d\") " pod="openstack/keystone-5499bdc9-q6hr4" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.201585 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e327b0-6e20-4b1d-a18f-64b8b49ef36d-combined-ca-bundle\") pod \"keystone-5499bdc9-q6hr4\" (UID: \"d9e327b0-6e20-4b1d-a18f-64b8b49ef36d\") " pod="openstack/keystone-5499bdc9-q6hr4" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.202573 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9e327b0-6e20-4b1d-a18f-64b8b49ef36d-scripts\") pod \"keystone-5499bdc9-q6hr4\" (UID: \"d9e327b0-6e20-4b1d-a18f-64b8b49ef36d\") " pod="openstack/keystone-5499bdc9-q6hr4" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.209401 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9e327b0-6e20-4b1d-a18f-64b8b49ef36d-internal-tls-certs\") pod \"keystone-5499bdc9-q6hr4\" (UID: \"d9e327b0-6e20-4b1d-a18f-64b8b49ef36d\") " pod="openstack/keystone-5499bdc9-q6hr4" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.214121 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9e327b0-6e20-4b1d-a18f-64b8b49ef36d-config-data\") pod \"keystone-5499bdc9-q6hr4\" (UID: \"d9e327b0-6e20-4b1d-a18f-64b8b49ef36d\") " 
pod="openstack/keystone-5499bdc9-q6hr4" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.242670 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf5dx\" (UniqueName: \"kubernetes.io/projected/d9e327b0-6e20-4b1d-a18f-64b8b49ef36d-kube-api-access-vf5dx\") pod \"keystone-5499bdc9-q6hr4\" (UID: \"d9e327b0-6e20-4b1d-a18f-64b8b49ef36d\") " pod="openstack/keystone-5499bdc9-q6hr4" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.250246 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-j5gfz" event={"ID":"04dae116-ceca-4588-9cba-1266bfa92caf","Type":"ContainerStarted","Data":"09a30c5dfcb3deacf09e3ccec1c515a8213db072a4cbe06ac44ba60b9a7d0159"} Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.252208 4886 generic.go:334] "Generic (PLEG): container finished" podID="8923ac96-087a-425b-a8b4-c09aa4be3d78" containerID="b56f617415d312996740dc4a8697ef643e749e77f4339179492aab6c12f2f0d4" exitCode=0 Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.252247 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8m2mm" event={"ID":"8923ac96-087a-425b-a8b4-c09aa4be3d78","Type":"ContainerDied","Data":"b56f617415d312996740dc4a8697ef643e749e77f4339179492aab6c12f2f0d4"} Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.274727 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-j5gfz" podStartSLOduration=3.998321662 podStartE2EDuration="1m12.27470308s" podCreationTimestamp="2026-01-29 17:06:04 +0000 UTC" firstStartedPulling="2026-01-29 17:06:06.313123617 +0000 UTC m=+2649.221842879" lastFinishedPulling="2026-01-29 17:07:14.589505035 +0000 UTC m=+2717.498224297" observedRunningTime="2026-01-29 17:07:16.271421087 +0000 UTC m=+2719.180140359" watchObservedRunningTime="2026-01-29 17:07:16.27470308 +0000 UTC m=+2719.183422352" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.336392 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5499bdc9-q6hr4" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.863146 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5499bdc9-q6hr4"] Jan 29 17:07:16 crc kubenswrapper[4886]: W0129 17:07:16.902041 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9e327b0_6e20_4b1d_a18f_64b8b49ef36d.slice/crio-9228f11c2df4be09dbc3fbbbdbf63e80d8c682804d34491222b93f145af49788 WatchSource:0}: Error finding container 9228f11c2df4be09dbc3fbbbdbf63e80d8c682804d34491222b93f145af49788: Status 404 returned error can't find the container with id 9228f11c2df4be09dbc3fbbbdbf63e80d8c682804d34491222b93f145af49788 Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.907343 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.918896 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 17:07:16 crc kubenswrapper[4886]: I0129 17:07:16.918942 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 17:07:17 crc kubenswrapper[4886]: I0129 17:07:17.267526 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5499bdc9-q6hr4" event={"ID":"d9e327b0-6e20-4b1d-a18f-64b8b49ef36d","Type":"ContainerStarted","Data":"8fb0484c6a214f05410ef82efa17abe7d106d7d860627a7ea48d168639c2ad83"} Jan 29 17:07:17 crc kubenswrapper[4886]: I0129 17:07:17.268853 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5499bdc9-q6hr4" Jan 29 17:07:17 crc kubenswrapper[4886]: I0129 17:07:17.268941 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5499bdc9-q6hr4" event={"ID":"d9e327b0-6e20-4b1d-a18f-64b8b49ef36d","Type":"ContainerStarted","Data":"9228f11c2df4be09dbc3fbbbdbf63e80d8c682804d34491222b93f145af49788"} Jan 29 17:07:17 crc kubenswrapper[4886]: I0129 17:07:17.286937 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5499bdc9-q6hr4" podStartSLOduration=2.286915719 podStartE2EDuration="2.286915719s" podCreationTimestamp="2026-01-29 17:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:07:17.284819 +0000 UTC m=+2720.193538282" watchObservedRunningTime="2026-01-29 17:07:17.286915719 +0000 UTC m=+2720.195634991" Jan 29 17:07:17 crc kubenswrapper[4886]: I0129 17:07:17.668488 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8m2mm" Jan 29 17:07:17 crc kubenswrapper[4886]: I0129 17:07:17.851052 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ckms\" (UniqueName: \"kubernetes.io/projected/8923ac96-087a-425b-a8b4-c09aa4be3d78-kube-api-access-8ckms\") pod \"8923ac96-087a-425b-a8b4-c09aa4be3d78\" (UID: \"8923ac96-087a-425b-a8b4-c09aa4be3d78\") " Jan 29 17:07:17 crc kubenswrapper[4886]: I0129 17:07:17.851481 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8923ac96-087a-425b-a8b4-c09aa4be3d78-config-data\") pod \"8923ac96-087a-425b-a8b4-c09aa4be3d78\" (UID: \"8923ac96-087a-425b-a8b4-c09aa4be3d78\") " Jan 29 17:07:17 crc kubenswrapper[4886]: I0129 17:07:17.851699 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8923ac96-087a-425b-a8b4-c09aa4be3d78-combined-ca-bundle\") pod \"8923ac96-087a-425b-a8b4-c09aa4be3d78\" (UID: \"8923ac96-087a-425b-a8b4-c09aa4be3d78\") " Jan 29 17:07:17 crc kubenswrapper[4886]: I0129 17:07:17.851887 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8923ac96-087a-425b-a8b4-c09aa4be3d78-scripts\") pod \"8923ac96-087a-425b-a8b4-c09aa4be3d78\" (UID: \"8923ac96-087a-425b-a8b4-c09aa4be3d78\") " Jan 29 17:07:17 crc kubenswrapper[4886]: I0129 17:07:17.852054 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8923ac96-087a-425b-a8b4-c09aa4be3d78-logs\") pod \"8923ac96-087a-425b-a8b4-c09aa4be3d78\" (UID: \"8923ac96-087a-425b-a8b4-c09aa4be3d78\") " Jan 29 17:07:17 crc kubenswrapper[4886]: I0129 17:07:17.852997 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8923ac96-087a-425b-a8b4-c09aa4be3d78-logs" (OuterVolumeSpecName: "logs") pod "8923ac96-087a-425b-a8b4-c09aa4be3d78" (UID: "8923ac96-087a-425b-a8b4-c09aa4be3d78"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:07:17 crc kubenswrapper[4886]: I0129 17:07:17.859242 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8923ac96-087a-425b-a8b4-c09aa4be3d78-scripts" (OuterVolumeSpecName: "scripts") pod "8923ac96-087a-425b-a8b4-c09aa4be3d78" (UID: "8923ac96-087a-425b-a8b4-c09aa4be3d78"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:17 crc kubenswrapper[4886]: I0129 17:07:17.860530 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8923ac96-087a-425b-a8b4-c09aa4be3d78-kube-api-access-8ckms" (OuterVolumeSpecName: "kube-api-access-8ckms") pod "8923ac96-087a-425b-a8b4-c09aa4be3d78" (UID: "8923ac96-087a-425b-a8b4-c09aa4be3d78"). InnerVolumeSpecName "kube-api-access-8ckms". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:07:17 crc kubenswrapper[4886]: I0129 17:07:17.880634 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8923ac96-087a-425b-a8b4-c09aa4be3d78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8923ac96-087a-425b-a8b4-c09aa4be3d78" (UID: "8923ac96-087a-425b-a8b4-c09aa4be3d78"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:17 crc kubenswrapper[4886]: I0129 17:07:17.904381 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8923ac96-087a-425b-a8b4-c09aa4be3d78-config-data" (OuterVolumeSpecName: "config-data") pod "8923ac96-087a-425b-a8b4-c09aa4be3d78" (UID: "8923ac96-087a-425b-a8b4-c09aa4be3d78"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:17 crc kubenswrapper[4886]: I0129 17:07:17.955027 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8923ac96-087a-425b-a8b4-c09aa4be3d78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:17 crc kubenswrapper[4886]: I0129 17:07:17.955071 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8923ac96-087a-425b-a8b4-c09aa4be3d78-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:17 crc kubenswrapper[4886]: I0129 17:07:17.955084 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8923ac96-087a-425b-a8b4-c09aa4be3d78-logs\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:17 crc kubenswrapper[4886]: I0129 17:07:17.955096 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ckms\" (UniqueName: \"kubernetes.io/projected/8923ac96-087a-425b-a8b4-c09aa4be3d78-kube-api-access-8ckms\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:17 crc kubenswrapper[4886]: I0129 17:07:17.955110 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8923ac96-087a-425b-a8b4-c09aa4be3d78-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.281554 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8m2mm" event={"ID":"8923ac96-087a-425b-a8b4-c09aa4be3d78","Type":"ContainerDied","Data":"7ba3dd51612ec84b7435debfb27c88330b100c1320a10e3e0bea0e482e076cd8"} Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.281603 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8m2mm" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.281739 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ba3dd51612ec84b7435debfb27c88330b100c1320a10e3e0bea0e482e076cd8" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.431194 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-795d8c76d8-x2zqv"] Jan 29 17:07:18 crc kubenswrapper[4886]: E0129 17:07:18.431633 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8923ac96-087a-425b-a8b4-c09aa4be3d78" containerName="placement-db-sync" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.431650 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="8923ac96-087a-425b-a8b4-c09aa4be3d78" containerName="placement-db-sync" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.431875 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="8923ac96-087a-425b-a8b4-c09aa4be3d78" containerName="placement-db-sync" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.436879 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-795d8c76d8-x2zqv" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.446312 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.446596 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-mrvvt" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.446707 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.446867 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.447437 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.458364 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-795d8c76d8-x2zqv"] Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.574749 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e13d48e-3469-4f76-8bae-ab1a21556f5a-combined-ca-bundle\") pod \"placement-795d8c76d8-x2zqv\" (UID: \"7e13d48e-3469-4f76-8bae-ab1a21556f5a\") " pod="openstack/placement-795d8c76d8-x2zqv" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.574803 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e13d48e-3469-4f76-8bae-ab1a21556f5a-public-tls-certs\") pod \"placement-795d8c76d8-x2zqv\" (UID: \"7e13d48e-3469-4f76-8bae-ab1a21556f5a\") " pod="openstack/placement-795d8c76d8-x2zqv" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.574856 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e13d48e-3469-4f76-8bae-ab1a21556f5a-config-data\") pod \"placement-795d8c76d8-x2zqv\" (UID: \"7e13d48e-3469-4f76-8bae-ab1a21556f5a\") " pod="openstack/placement-795d8c76d8-x2zqv" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.574915 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e13d48e-3469-4f76-8bae-ab1a21556f5a-logs\") pod \"placement-795d8c76d8-x2zqv\" (UID: \"7e13d48e-3469-4f76-8bae-ab1a21556f5a\") " pod="openstack/placement-795d8c76d8-x2zqv" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.574957 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e13d48e-3469-4f76-8bae-ab1a21556f5a-internal-tls-certs\") pod \"placement-795d8c76d8-x2zqv\" (UID: \"7e13d48e-3469-4f76-8bae-ab1a21556f5a\") " pod="openstack/placement-795d8c76d8-x2zqv" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.574998 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv2wf\" (UniqueName: \"kubernetes.io/projected/7e13d48e-3469-4f76-8bae-ab1a21556f5a-kube-api-access-jv2wf\") pod \"placement-795d8c76d8-x2zqv\" (UID: \"7e13d48e-3469-4f76-8bae-ab1a21556f5a\") " pod="openstack/placement-795d8c76d8-x2zqv" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 
17:07:18.575093 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e13d48e-3469-4f76-8bae-ab1a21556f5a-scripts\") pod \"placement-795d8c76d8-x2zqv\" (UID: \"7e13d48e-3469-4f76-8bae-ab1a21556f5a\") " pod="openstack/placement-795d8c76d8-x2zqv" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.676493 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e13d48e-3469-4f76-8bae-ab1a21556f5a-combined-ca-bundle\") pod \"placement-795d8c76d8-x2zqv\" (UID: \"7e13d48e-3469-4f76-8bae-ab1a21556f5a\") " pod="openstack/placement-795d8c76d8-x2zqv" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.676544 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e13d48e-3469-4f76-8bae-ab1a21556f5a-public-tls-certs\") pod \"placement-795d8c76d8-x2zqv\" (UID: \"7e13d48e-3469-4f76-8bae-ab1a21556f5a\") " pod="openstack/placement-795d8c76d8-x2zqv" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.676587 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e13d48e-3469-4f76-8bae-ab1a21556f5a-config-data\") pod \"placement-795d8c76d8-x2zqv\" (UID: \"7e13d48e-3469-4f76-8bae-ab1a21556f5a\") " pod="openstack/placement-795d8c76d8-x2zqv" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.676623 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e13d48e-3469-4f76-8bae-ab1a21556f5a-logs\") pod \"placement-795d8c76d8-x2zqv\" (UID: \"7e13d48e-3469-4f76-8bae-ab1a21556f5a\") " pod="openstack/placement-795d8c76d8-x2zqv" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.676652 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e13d48e-3469-4f76-8bae-ab1a21556f5a-internal-tls-certs\") pod \"placement-795d8c76d8-x2zqv\" (UID: \"7e13d48e-3469-4f76-8bae-ab1a21556f5a\") " pod="openstack/placement-795d8c76d8-x2zqv" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.676683 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv2wf\" (UniqueName: \"kubernetes.io/projected/7e13d48e-3469-4f76-8bae-ab1a21556f5a-kube-api-access-jv2wf\") pod \"placement-795d8c76d8-x2zqv\" (UID: \"7e13d48e-3469-4f76-8bae-ab1a21556f5a\") " pod="openstack/placement-795d8c76d8-x2zqv" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.676751 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e13d48e-3469-4f76-8bae-ab1a21556f5a-scripts\") pod \"placement-795d8c76d8-x2zqv\" (UID: \"7e13d48e-3469-4f76-8bae-ab1a21556f5a\") " pod="openstack/placement-795d8c76d8-x2zqv" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.677539 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e13d48e-3469-4f76-8bae-ab1a21556f5a-logs\") pod \"placement-795d8c76d8-x2zqv\" (UID: \"7e13d48e-3469-4f76-8bae-ab1a21556f5a\") " pod="openstack/placement-795d8c76d8-x2zqv" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.689926 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7e13d48e-3469-4f76-8bae-ab1a21556f5a-internal-tls-certs\") pod \"placement-795d8c76d8-x2zqv\" (UID: \"7e13d48e-3469-4f76-8bae-ab1a21556f5a\") " pod="openstack/placement-795d8c76d8-x2zqv" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.691633 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e13d48e-3469-4f76-8bae-ab1a21556f5a-config-data\") pod \"placement-795d8c76d8-x2zqv\" (UID: \"7e13d48e-3469-4f76-8bae-ab1a21556f5a\") " pod="openstack/placement-795d8c76d8-x2zqv" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.692413 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e13d48e-3469-4f76-8bae-ab1a21556f5a-public-tls-certs\") pod \"placement-795d8c76d8-x2zqv\" (UID: \"7e13d48e-3469-4f76-8bae-ab1a21556f5a\") " pod="openstack/placement-795d8c76d8-x2zqv" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.693157 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e13d48e-3469-4f76-8bae-ab1a21556f5a-combined-ca-bundle\") pod \"placement-795d8c76d8-x2zqv\" (UID: \"7e13d48e-3469-4f76-8bae-ab1a21556f5a\") " pod="openstack/placement-795d8c76d8-x2zqv" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.700705 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e13d48e-3469-4f76-8bae-ab1a21556f5a-scripts\") pod \"placement-795d8c76d8-x2zqv\" (UID: \"7e13d48e-3469-4f76-8bae-ab1a21556f5a\") " pod="openstack/placement-795d8c76d8-x2zqv" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.708101 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv2wf\" (UniqueName: \"kubernetes.io/projected/7e13d48e-3469-4f76-8bae-ab1a21556f5a-kube-api-access-jv2wf\") pod \"placement-795d8c76d8-x2zqv\" (UID: \"7e13d48e-3469-4f76-8bae-ab1a21556f5a\") " pod="openstack/placement-795d8c76d8-x2zqv" Jan 29 17:07:18 crc kubenswrapper[4886]: I0129 17:07:18.799596 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-795d8c76d8-x2zqv" Jan 29 17:07:19 crc kubenswrapper[4886]: I0129 17:07:19.369399 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-795d8c76d8-x2zqv"] Jan 29 17:07:19 crc kubenswrapper[4886]: W0129 17:07:19.374518 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e13d48e_3469_4f76_8bae_ab1a21556f5a.slice/crio-02ef7ab85d551b8c7255372e2df6940c041d74302e7d6145a578475e935f0fc2 WatchSource:0}: Error finding container 02ef7ab85d551b8c7255372e2df6940c041d74302e7d6145a578475e935f0fc2: Status 404 returned error can't find the container with id 02ef7ab85d551b8c7255372e2df6940c041d74302e7d6145a578475e935f0fc2 Jan 29 17:07:19 crc kubenswrapper[4886]: I0129 17:07:19.668896 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 17:07:20 crc kubenswrapper[4886]: I0129 17:07:20.309949 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-795d8c76d8-x2zqv" event={"ID":"7e13d48e-3469-4f76-8bae-ab1a21556f5a","Type":"ContainerStarted","Data":"58e68e11ea532ee03604b1a3e5d94c5d1b6fff5c393f020ac0dcc0a7eb5b76a9"} Jan 29 17:07:20 crc kubenswrapper[4886]: I0129 17:07:20.310342 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-795d8c76d8-x2zqv" event={"ID":"7e13d48e-3469-4f76-8bae-ab1a21556f5a","Type":"ContainerStarted","Data":"65b3a2de2f2bfa8b452044b25f0f46c9c8d2cf5077cb7b5fb82f688d7f51c24d"} Jan 29 17:07:20 crc kubenswrapper[4886]: I0129 17:07:20.310363 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-795d8c76d8-x2zqv" event={"ID":"7e13d48e-3469-4f76-8bae-ab1a21556f5a","Type":"ContainerStarted","Data":"02ef7ab85d551b8c7255372e2df6940c041d74302e7d6145a578475e935f0fc2"} Jan 29 17:07:20 crc kubenswrapper[4886]: I0129 17:07:20.310406 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-795d8c76d8-x2zqv" Jan 29 17:07:20 crc kubenswrapper[4886]: I0129 17:07:20.310434 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-795d8c76d8-x2zqv" Jan 29 17:07:20 crc kubenswrapper[4886]: I0129 17:07:20.338015 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-795d8c76d8-x2zqv" podStartSLOduration=2.337994294 podStartE2EDuration="2.337994294s" podCreationTimestamp="2026-01-29 17:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:07:20.332447608 +0000 UTC m=+2723.241166880" watchObservedRunningTime="2026-01-29 17:07:20.337994294 +0000 UTC m=+2723.246713556" Jan 29 17:07:29 crc kubenswrapper[4886]: I0129 17:07:29.418123 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87986c31-37d7-4624-87a2-b5678e01d865","Type":"ContainerStarted","Data":"6996141f6a6ddf86f1830cd32cfa7315a6d22f9c619ba74af481f02099316d55"} Jan 29 17:07:29 crc kubenswrapper[4886]: I0129 17:07:29.418729 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 17:07:29 crc kubenswrapper[4886]: I0129 17:07:29.418439 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87986c31-37d7-4624-87a2-b5678e01d865" containerName="sg-core" 
containerID="cri-o://2af8246b154ee39fedcfdd8e1579a14d1154c4bc23cb6682bb1d0354640c6bcf" gracePeriod=30 Jan 29 17:07:29 crc kubenswrapper[4886]: I0129 17:07:29.418367 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87986c31-37d7-4624-87a2-b5678e01d865" containerName="ceilometer-central-agent" containerID="cri-o://6528db29d7d5821f74fc120a90a127f94065eb87d3cb30310e3e2849cde918e4" gracePeriod=30 Jan 29 17:07:29 crc kubenswrapper[4886]: I0129 17:07:29.418496 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87986c31-37d7-4624-87a2-b5678e01d865" containerName="ceilometer-notification-agent" containerID="cri-o://fc4b86cf717b23c7c04aaa4106c7da0d6d9a36f8580e8da13099630ec38cb927" gracePeriod=30 Jan 29 17:07:29 crc kubenswrapper[4886]: I0129 17:07:29.418523 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87986c31-37d7-4624-87a2-b5678e01d865" containerName="proxy-httpd" containerID="cri-o://6996141f6a6ddf86f1830cd32cfa7315a6d22f9c619ba74af481f02099316d55" gracePeriod=30 Jan 29 17:07:29 crc kubenswrapper[4886]: I0129 17:07:29.454070 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.666899876 podStartE2EDuration="1m25.454051854s" podCreationTimestamp="2026-01-29 17:06:04 +0000 UTC" firstStartedPulling="2026-01-29 17:06:07.088088704 +0000 UTC m=+2649.996807966" lastFinishedPulling="2026-01-29 17:07:28.875240682 +0000 UTC m=+2731.783959944" observedRunningTime="2026-01-29 17:07:29.451936645 +0000 UTC m=+2732.360655917" watchObservedRunningTime="2026-01-29 17:07:29.454051854 +0000 UTC m=+2732.362771146" Jan 29 17:07:30 crc kubenswrapper[4886]: I0129 17:07:30.432735 4886 generic.go:334] "Generic (PLEG): container finished" podID="87986c31-37d7-4624-87a2-b5678e01d865" containerID="6996141f6a6ddf86f1830cd32cfa7315a6d22f9c619ba74af481f02099316d55" exitCode=0 Jan 29 17:07:30 crc kubenswrapper[4886]: I0129 17:07:30.433130 4886 generic.go:334] "Generic (PLEG): container finished" podID="87986c31-37d7-4624-87a2-b5678e01d865" containerID="2af8246b154ee39fedcfdd8e1579a14d1154c4bc23cb6682bb1d0354640c6bcf" exitCode=2 Jan 29 17:07:30 crc kubenswrapper[4886]: I0129 17:07:30.433152 4886 generic.go:334] "Generic (PLEG): container finished" podID="87986c31-37d7-4624-87a2-b5678e01d865" containerID="6528db29d7d5821f74fc120a90a127f94065eb87d3cb30310e3e2849cde918e4" exitCode=0 Jan 29 17:07:30 crc kubenswrapper[4886]: I0129 17:07:30.432825 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87986c31-37d7-4624-87a2-b5678e01d865","Type":"ContainerDied","Data":"6996141f6a6ddf86f1830cd32cfa7315a6d22f9c619ba74af481f02099316d55"} Jan 29 17:07:30 crc kubenswrapper[4886]: I0129 17:07:30.433200 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87986c31-37d7-4624-87a2-b5678e01d865","Type":"ContainerDied","Data":"2af8246b154ee39fedcfdd8e1579a14d1154c4bc23cb6682bb1d0354640c6bcf"} Jan 29 17:07:30 crc kubenswrapper[4886]: I0129 17:07:30.433229 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87986c31-37d7-4624-87a2-b5678e01d865","Type":"ContainerDied","Data":"6528db29d7d5821f74fc120a90a127f94065eb87d3cb30310e3e2849cde918e4"} Jan 29 17:07:32 crc kubenswrapper[4886]: I0129 17:07:32.457532 4886 generic.go:334] "Generic (PLEG): container finished" 
podID="ffb099fb-7bdb-4969-b3cb-6fc4ef498afd" containerID="462d0b69d42ff5bdae3194985f827b482bb0c2607dbc772e35d27e51d1171c94" exitCode=0 Jan 29 17:07:32 crc kubenswrapper[4886]: I0129 17:07:32.457622 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-q2dxw" event={"ID":"ffb099fb-7bdb-4969-b3cb-6fc4ef498afd","Type":"ContainerDied","Data":"462d0b69d42ff5bdae3194985f827b482bb0c2607dbc772e35d27e51d1171c94"} Jan 29 17:07:33 crc kubenswrapper[4886]: I0129 17:07:33.941372 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-q2dxw" Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.037598 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86p7n\" (UniqueName: \"kubernetes.io/projected/ffb099fb-7bdb-4969-b3cb-6fc4ef498afd-kube-api-access-86p7n\") pod \"ffb099fb-7bdb-4969-b3cb-6fc4ef498afd\" (UID: \"ffb099fb-7bdb-4969-b3cb-6fc4ef498afd\") " Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.037855 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ffb099fb-7bdb-4969-b3cb-6fc4ef498afd-db-sync-config-data\") pod \"ffb099fb-7bdb-4969-b3cb-6fc4ef498afd\" (UID: \"ffb099fb-7bdb-4969-b3cb-6fc4ef498afd\") " Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.037932 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffb099fb-7bdb-4969-b3cb-6fc4ef498afd-combined-ca-bundle\") pod \"ffb099fb-7bdb-4969-b3cb-6fc4ef498afd\" (UID: \"ffb099fb-7bdb-4969-b3cb-6fc4ef498afd\") " Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.054921 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffb099fb-7bdb-4969-b3cb-6fc4ef498afd-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ffb099fb-7bdb-4969-b3cb-6fc4ef498afd" (UID: "ffb099fb-7bdb-4969-b3cb-6fc4ef498afd"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.055001 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffb099fb-7bdb-4969-b3cb-6fc4ef498afd-kube-api-access-86p7n" (OuterVolumeSpecName: "kube-api-access-86p7n") pod "ffb099fb-7bdb-4969-b3cb-6fc4ef498afd" (UID: "ffb099fb-7bdb-4969-b3cb-6fc4ef498afd"). InnerVolumeSpecName "kube-api-access-86p7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.070861 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffb099fb-7bdb-4969-b3cb-6fc4ef498afd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffb099fb-7bdb-4969-b3cb-6fc4ef498afd" (UID: "ffb099fb-7bdb-4969-b3cb-6fc4ef498afd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.140034 4886 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ffb099fb-7bdb-4969-b3cb-6fc4ef498afd-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.140073 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffb099fb-7bdb-4969-b3cb-6fc4ef498afd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.140088 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86p7n\" (UniqueName: \"kubernetes.io/projected/ffb099fb-7bdb-4969-b3cb-6fc4ef498afd-kube-api-access-86p7n\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.486983 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-q2dxw" event={"ID":"ffb099fb-7bdb-4969-b3cb-6fc4ef498afd","Type":"ContainerDied","Data":"474a2d0d1c07609e70e6ff2d358c4e7ec5598344e910e4e2e3ec3d713255b48d"} Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.487022 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="474a2d0d1c07609e70e6ff2d358c4e7ec5598344e910e4e2e3ec3d713255b48d" Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.487638 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-q2dxw" Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.877726 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-f4657cb95-4tfvc"] Jan 29 17:07:34 crc kubenswrapper[4886]: E0129 17:07:34.878499 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb099fb-7bdb-4969-b3cb-6fc4ef498afd" containerName="barbican-db-sync" Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.878520 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb099fb-7bdb-4969-b3cb-6fc4ef498afd" containerName="barbican-db-sync" Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.878735 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffb099fb-7bdb-4969-b3cb-6fc4ef498afd" containerName="barbican-db-sync" Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.879840 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-f4657cb95-4tfvc" Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.883555 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.883725 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.883850 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5k8bj" Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.892637 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-85cc5d579d-jhqqd"] Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.894349 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-85cc5d579d-jhqqd" Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.898661 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.921761 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-f4657cb95-4tfvc"] Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.957815 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f83894a-73ec-405a-bdd2-2044b3f9140a-config-data-custom\") pod \"barbican-worker-f4657cb95-4tfvc\" (UID: \"8f83894a-73ec-405a-bdd2-2044b3f9140a\") " pod="openstack/barbican-worker-f4657cb95-4tfvc" Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.957867 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/054e527c-8ce1-4d03-8fef-0430934daba3-config-data\") pod \"barbican-keystone-listener-85cc5d579d-jhqqd\" (UID: \"054e527c-8ce1-4d03-8fef-0430934daba3\") " pod="openstack/barbican-keystone-listener-85cc5d579d-jhqqd" Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.957951 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/054e527c-8ce1-4d03-8fef-0430934daba3-combined-ca-bundle\") pod \"barbican-keystone-listener-85cc5d579d-jhqqd\" (UID: \"054e527c-8ce1-4d03-8fef-0430934daba3\") " pod="openstack/barbican-keystone-listener-85cc5d579d-jhqqd" Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.958068 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/054e527c-8ce1-4d03-8fef-0430934daba3-config-data-custom\") pod \"barbican-keystone-listener-85cc5d579d-jhqqd\" (UID: \"054e527c-8ce1-4d03-8fef-0430934daba3\") " pod="openstack/barbican-keystone-listener-85cc5d579d-jhqqd" Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.958115 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f83894a-73ec-405a-bdd2-2044b3f9140a-config-data\") pod \"barbican-worker-f4657cb95-4tfvc\" (UID: \"8f83894a-73ec-405a-bdd2-2044b3f9140a\") " pod="openstack/barbican-worker-f4657cb95-4tfvc" Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.958203 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/054e527c-8ce1-4d03-8fef-0430934daba3-logs\") pod \"barbican-keystone-listener-85cc5d579d-jhqqd\" (UID: \"054e527c-8ce1-4d03-8fef-0430934daba3\") " pod="openstack/barbican-keystone-listener-85cc5d579d-jhqqd" Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.958298 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcrxr\" (UniqueName: \"kubernetes.io/projected/8f83894a-73ec-405a-bdd2-2044b3f9140a-kube-api-access-rcrxr\") pod \"barbican-worker-f4657cb95-4tfvc\" (UID: \"8f83894a-73ec-405a-bdd2-2044b3f9140a\") " pod="openstack/barbican-worker-f4657cb95-4tfvc" Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.958384 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f83894a-73ec-405a-bdd2-2044b3f9140a-combined-ca-bundle\") pod \"barbican-worker-f4657cb95-4tfvc\" (UID: \"8f83894a-73ec-405a-bdd2-2044b3f9140a\") " pod="openstack/barbican-worker-f4657cb95-4tfvc" Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.958566 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsqnr\" (UniqueName: \"kubernetes.io/projected/054e527c-8ce1-4d03-8fef-0430934daba3-kube-api-access-xsqnr\") pod \"barbican-keystone-listener-85cc5d579d-jhqqd\" (UID: \"054e527c-8ce1-4d03-8fef-0430934daba3\") " pod="openstack/barbican-keystone-listener-85cc5d579d-jhqqd" Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.958592 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f83894a-73ec-405a-bdd2-2044b3f9140a-logs\") pod \"barbican-worker-f4657cb95-4tfvc\" (UID: \"8f83894a-73ec-405a-bdd2-2044b3f9140a\") " pod="openstack/barbican-worker-f4657cb95-4tfvc" Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.966405 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-85cc5d579d-jhqqd"] Jan 29 17:07:34 crc kubenswrapper[4886]: I0129 17:07:34.999154 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-jsg5q"] Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.001298 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.004533 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-jsg5q"] Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.063774 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/054e527c-8ce1-4d03-8fef-0430934daba3-logs\") pod \"barbican-keystone-listener-85cc5d579d-jhqqd\" (UID: \"054e527c-8ce1-4d03-8fef-0430934daba3\") " pod="openstack/barbican-keystone-listener-85cc5d579d-jhqqd" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.064012 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcrxr\" (UniqueName: \"kubernetes.io/projected/8f83894a-73ec-405a-bdd2-2044b3f9140a-kube-api-access-rcrxr\") pod \"barbican-worker-f4657cb95-4tfvc\" (UID: \"8f83894a-73ec-405a-bdd2-2044b3f9140a\") " pod="openstack/barbican-worker-f4657cb95-4tfvc" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.064092 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f83894a-73ec-405a-bdd2-2044b3f9140a-combined-ca-bundle\") pod \"barbican-worker-f4657cb95-4tfvc\" (UID: \"8f83894a-73ec-405a-bdd2-2044b3f9140a\") " pod="openstack/barbican-worker-f4657cb95-4tfvc" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.064216 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsqnr\" (UniqueName: \"kubernetes.io/projected/054e527c-8ce1-4d03-8fef-0430934daba3-kube-api-access-xsqnr\") pod \"barbican-keystone-listener-85cc5d579d-jhqqd\" (UID: \"054e527c-8ce1-4d03-8fef-0430934daba3\") " pod="openstack/barbican-keystone-listener-85cc5d579d-jhqqd" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 
17:07:35.064287 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f83894a-73ec-405a-bdd2-2044b3f9140a-logs\") pod \"barbican-worker-f4657cb95-4tfvc\" (UID: \"8f83894a-73ec-405a-bdd2-2044b3f9140a\") " pod="openstack/barbican-worker-f4657cb95-4tfvc" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.064404 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ac97bdb-475a-4061-96b0-1423be10bb5b-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-jsg5q\" (UID: \"9ac97bdb-475a-4061-96b0-1423be10bb5b\") " pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.064500 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f83894a-73ec-405a-bdd2-2044b3f9140a-config-data-custom\") pod \"barbican-worker-f4657cb95-4tfvc\" (UID: \"8f83894a-73ec-405a-bdd2-2044b3f9140a\") " pod="openstack/barbican-worker-f4657cb95-4tfvc" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.064577 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/054e527c-8ce1-4d03-8fef-0430934daba3-config-data\") pod \"barbican-keystone-listener-85cc5d579d-jhqqd\" (UID: \"054e527c-8ce1-4d03-8fef-0430934daba3\") " pod="openstack/barbican-keystone-listener-85cc5d579d-jhqqd" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.064668 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ac97bdb-475a-4061-96b0-1423be10bb5b-config\") pod \"dnsmasq-dns-586bdc5f9-jsg5q\" (UID: \"9ac97bdb-475a-4061-96b0-1423be10bb5b\") " pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.067435 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ac97bdb-475a-4061-96b0-1423be10bb5b-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-jsg5q\" (UID: \"9ac97bdb-475a-4061-96b0-1423be10bb5b\") " pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.067669 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/054e527c-8ce1-4d03-8fef-0430934daba3-combined-ca-bundle\") pod \"barbican-keystone-listener-85cc5d579d-jhqqd\" (UID: \"054e527c-8ce1-4d03-8fef-0430934daba3\") " pod="openstack/barbican-keystone-listener-85cc5d579d-jhqqd" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.067773 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ac97bdb-475a-4061-96b0-1423be10bb5b-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-jsg5q\" (UID: \"9ac97bdb-475a-4061-96b0-1423be10bb5b\") " pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.067964 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/054e527c-8ce1-4d03-8fef-0430934daba3-config-data-custom\") pod \"barbican-keystone-listener-85cc5d579d-jhqqd\" (UID: \"054e527c-8ce1-4d03-8fef-0430934daba3\") " 
pod="openstack/barbican-keystone-listener-85cc5d579d-jhqqd" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.068086 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f83894a-73ec-405a-bdd2-2044b3f9140a-config-data\") pod \"barbican-worker-f4657cb95-4tfvc\" (UID: \"8f83894a-73ec-405a-bdd2-2044b3f9140a\") " pod="openstack/barbican-worker-f4657cb95-4tfvc" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.068172 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzzbl\" (UniqueName: \"kubernetes.io/projected/9ac97bdb-475a-4061-96b0-1423be10bb5b-kube-api-access-tzzbl\") pod \"dnsmasq-dns-586bdc5f9-jsg5q\" (UID: \"9ac97bdb-475a-4061-96b0-1423be10bb5b\") " pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.068247 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ac97bdb-475a-4061-96b0-1423be10bb5b-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-jsg5q\" (UID: \"9ac97bdb-475a-4061-96b0-1423be10bb5b\") " pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.076762 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/054e527c-8ce1-4d03-8fef-0430934daba3-config-data\") pod \"barbican-keystone-listener-85cc5d579d-jhqqd\" (UID: \"054e527c-8ce1-4d03-8fef-0430934daba3\") " pod="openstack/barbican-keystone-listener-85cc5d579d-jhqqd" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.064337 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/054e527c-8ce1-4d03-8fef-0430934daba3-logs\") pod \"barbican-keystone-listener-85cc5d579d-jhqqd\" (UID: \"054e527c-8ce1-4d03-8fef-0430934daba3\") " pod="openstack/barbican-keystone-listener-85cc5d579d-jhqqd" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.065311 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f83894a-73ec-405a-bdd2-2044b3f9140a-logs\") pod \"barbican-worker-f4657cb95-4tfvc\" (UID: \"8f83894a-73ec-405a-bdd2-2044b3f9140a\") " pod="openstack/barbican-worker-f4657cb95-4tfvc" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.079584 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/054e527c-8ce1-4d03-8fef-0430934daba3-combined-ca-bundle\") pod \"barbican-keystone-listener-85cc5d579d-jhqqd\" (UID: \"054e527c-8ce1-4d03-8fef-0430934daba3\") " pod="openstack/barbican-keystone-listener-85cc5d579d-jhqqd" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.087008 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f83894a-73ec-405a-bdd2-2044b3f9140a-config-data-custom\") pod \"barbican-worker-f4657cb95-4tfvc\" (UID: \"8f83894a-73ec-405a-bdd2-2044b3f9140a\") " pod="openstack/barbican-worker-f4657cb95-4tfvc" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.092687 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f83894a-73ec-405a-bdd2-2044b3f9140a-combined-ca-bundle\") pod \"barbican-worker-f4657cb95-4tfvc\" (UID: 
\"8f83894a-73ec-405a-bdd2-2044b3f9140a\") " pod="openstack/barbican-worker-f4657cb95-4tfvc" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.094184 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f83894a-73ec-405a-bdd2-2044b3f9140a-config-data\") pod \"barbican-worker-f4657cb95-4tfvc\" (UID: \"8f83894a-73ec-405a-bdd2-2044b3f9140a\") " pod="openstack/barbican-worker-f4657cb95-4tfvc" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.095718 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcrxr\" (UniqueName: \"kubernetes.io/projected/8f83894a-73ec-405a-bdd2-2044b3f9140a-kube-api-access-rcrxr\") pod \"barbican-worker-f4657cb95-4tfvc\" (UID: \"8f83894a-73ec-405a-bdd2-2044b3f9140a\") " pod="openstack/barbican-worker-f4657cb95-4tfvc" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.098566 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsqnr\" (UniqueName: \"kubernetes.io/projected/054e527c-8ce1-4d03-8fef-0430934daba3-kube-api-access-xsqnr\") pod \"barbican-keystone-listener-85cc5d579d-jhqqd\" (UID: \"054e527c-8ce1-4d03-8fef-0430934daba3\") " pod="openstack/barbican-keystone-listener-85cc5d579d-jhqqd" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.114785 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/054e527c-8ce1-4d03-8fef-0430934daba3-config-data-custom\") pod \"barbican-keystone-listener-85cc5d579d-jhqqd\" (UID: \"054e527c-8ce1-4d03-8fef-0430934daba3\") " pod="openstack/barbican-keystone-listener-85cc5d579d-jhqqd" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.119829 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-55f7ff7dd6-jj4jw"] Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.121476 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-55f7ff7dd6-jj4jw" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.127781 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.141611 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-55f7ff7dd6-jj4jw"] Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.171133 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea36feff-2438-49e4-b779-0b083addd0a8-combined-ca-bundle\") pod \"barbican-api-55f7ff7dd6-jj4jw\" (UID: \"ea36feff-2438-49e4-b779-0b083addd0a8\") " pod="openstack/barbican-api-55f7ff7dd6-jj4jw" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.171195 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzzbl\" (UniqueName: \"kubernetes.io/projected/9ac97bdb-475a-4061-96b0-1423be10bb5b-kube-api-access-tzzbl\") pod \"dnsmasq-dns-586bdc5f9-jsg5q\" (UID: \"9ac97bdb-475a-4061-96b0-1423be10bb5b\") " pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.171226 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ac97bdb-475a-4061-96b0-1423be10bb5b-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-jsg5q\" (UID: \"9ac97bdb-475a-4061-96b0-1423be10bb5b\") " pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.171443 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea36feff-2438-49e4-b779-0b083addd0a8-config-data\") pod \"barbican-api-55f7ff7dd6-jj4jw\" (UID: \"ea36feff-2438-49e4-b779-0b083addd0a8\") " pod="openstack/barbican-api-55f7ff7dd6-jj4jw" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.171484 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ac97bdb-475a-4061-96b0-1423be10bb5b-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-jsg5q\" (UID: \"9ac97bdb-475a-4061-96b0-1423be10bb5b\") " pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.171557 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ac97bdb-475a-4061-96b0-1423be10bb5b-config\") pod \"dnsmasq-dns-586bdc5f9-jsg5q\" (UID: \"9ac97bdb-475a-4061-96b0-1423be10bb5b\") " pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.171596 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ac97bdb-475a-4061-96b0-1423be10bb5b-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-jsg5q\" (UID: \"9ac97bdb-475a-4061-96b0-1423be10bb5b\") " pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.171641 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ac97bdb-475a-4061-96b0-1423be10bb5b-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-jsg5q\" (UID: \"9ac97bdb-475a-4061-96b0-1423be10bb5b\") " pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q" Jan 29 17:07:35 
crc kubenswrapper[4886]: I0129 17:07:35.171675 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25kn5\" (UniqueName: \"kubernetes.io/projected/ea36feff-2438-49e4-b779-0b083addd0a8-kube-api-access-25kn5\") pod \"barbican-api-55f7ff7dd6-jj4jw\" (UID: \"ea36feff-2438-49e4-b779-0b083addd0a8\") " pod="openstack/barbican-api-55f7ff7dd6-jj4jw" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.171695 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea36feff-2438-49e4-b779-0b083addd0a8-config-data-custom\") pod \"barbican-api-55f7ff7dd6-jj4jw\" (UID: \"ea36feff-2438-49e4-b779-0b083addd0a8\") " pod="openstack/barbican-api-55f7ff7dd6-jj4jw" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.171730 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea36feff-2438-49e4-b779-0b083addd0a8-logs\") pod \"barbican-api-55f7ff7dd6-jj4jw\" (UID: \"ea36feff-2438-49e4-b779-0b083addd0a8\") " pod="openstack/barbican-api-55f7ff7dd6-jj4jw" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.172441 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ac97bdb-475a-4061-96b0-1423be10bb5b-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-jsg5q\" (UID: \"9ac97bdb-475a-4061-96b0-1423be10bb5b\") " pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.172478 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ac97bdb-475a-4061-96b0-1423be10bb5b-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-jsg5q\" (UID: \"9ac97bdb-475a-4061-96b0-1423be10bb5b\") " pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.172478 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ac97bdb-475a-4061-96b0-1423be10bb5b-config\") pod \"dnsmasq-dns-586bdc5f9-jsg5q\" (UID: \"9ac97bdb-475a-4061-96b0-1423be10bb5b\") " pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.172747 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ac97bdb-475a-4061-96b0-1423be10bb5b-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-jsg5q\" (UID: \"9ac97bdb-475a-4061-96b0-1423be10bb5b\") " pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.173145 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ac97bdb-475a-4061-96b0-1423be10bb5b-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-jsg5q\" (UID: \"9ac97bdb-475a-4061-96b0-1423be10bb5b\") " pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.188615 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzzbl\" (UniqueName: \"kubernetes.io/projected/9ac97bdb-475a-4061-96b0-1423be10bb5b-kube-api-access-tzzbl\") pod \"dnsmasq-dns-586bdc5f9-jsg5q\" (UID: \"9ac97bdb-475a-4061-96b0-1423be10bb5b\") " pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.219749 4886 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-f4657cb95-4tfvc" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.228613 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-85cc5d579d-jhqqd" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.273574 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea36feff-2438-49e4-b779-0b083addd0a8-config-data\") pod \"barbican-api-55f7ff7dd6-jj4jw\" (UID: \"ea36feff-2438-49e4-b779-0b083addd0a8\") " pod="openstack/barbican-api-55f7ff7dd6-jj4jw" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.273725 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25kn5\" (UniqueName: \"kubernetes.io/projected/ea36feff-2438-49e4-b779-0b083addd0a8-kube-api-access-25kn5\") pod \"barbican-api-55f7ff7dd6-jj4jw\" (UID: \"ea36feff-2438-49e4-b779-0b083addd0a8\") " pod="openstack/barbican-api-55f7ff7dd6-jj4jw" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.273773 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea36feff-2438-49e4-b779-0b083addd0a8-config-data-custom\") pod \"barbican-api-55f7ff7dd6-jj4jw\" (UID: \"ea36feff-2438-49e4-b779-0b083addd0a8\") " pod="openstack/barbican-api-55f7ff7dd6-jj4jw" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.273827 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea36feff-2438-49e4-b779-0b083addd0a8-logs\") pod \"barbican-api-55f7ff7dd6-jj4jw\" (UID: \"ea36feff-2438-49e4-b779-0b083addd0a8\") " pod="openstack/barbican-api-55f7ff7dd6-jj4jw" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.273875 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea36feff-2438-49e4-b779-0b083addd0a8-combined-ca-bundle\") pod \"barbican-api-55f7ff7dd6-jj4jw\" (UID: \"ea36feff-2438-49e4-b779-0b083addd0a8\") " pod="openstack/barbican-api-55f7ff7dd6-jj4jw" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.277930 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea36feff-2438-49e4-b779-0b083addd0a8-combined-ca-bundle\") pod \"barbican-api-55f7ff7dd6-jj4jw\" (UID: \"ea36feff-2438-49e4-b779-0b083addd0a8\") " pod="openstack/barbican-api-55f7ff7dd6-jj4jw" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.281176 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea36feff-2438-49e4-b779-0b083addd0a8-config-data\") pod \"barbican-api-55f7ff7dd6-jj4jw\" (UID: \"ea36feff-2438-49e4-b779-0b083addd0a8\") " pod="openstack/barbican-api-55f7ff7dd6-jj4jw" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.283928 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea36feff-2438-49e4-b779-0b083addd0a8-config-data-custom\") pod \"barbican-api-55f7ff7dd6-jj4jw\" (UID: \"ea36feff-2438-49e4-b779-0b083addd0a8\") " pod="openstack/barbican-api-55f7ff7dd6-jj4jw" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.285020 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea36feff-2438-49e4-b779-0b083addd0a8-logs\") pod \"barbican-api-55f7ff7dd6-jj4jw\" (UID: \"ea36feff-2438-49e4-b779-0b083addd0a8\") " pod="openstack/barbican-api-55f7ff7dd6-jj4jw" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.313550 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25kn5\" (UniqueName: \"kubernetes.io/projected/ea36feff-2438-49e4-b779-0b083addd0a8-kube-api-access-25kn5\") pod \"barbican-api-55f7ff7dd6-jj4jw\" (UID: \"ea36feff-2438-49e4-b779-0b083addd0a8\") " pod="openstack/barbican-api-55f7ff7dd6-jj4jw" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.333514 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.383498 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-55f7ff7dd6-jj4jw" Jan 29 17:07:35 crc kubenswrapper[4886]: I0129 17:07:35.863769 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-f4657cb95-4tfvc"] Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.026830 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-jsg5q"] Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.039573 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-85cc5d579d-jhqqd"] Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.243387 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-55f7ff7dd6-jj4jw"] Jan 29 17:07:36 crc kubenswrapper[4886]: W0129 17:07:36.302360 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea36feff_2438_49e4_b779_0b083addd0a8.slice/crio-e9dafe9a7a14455f6d6567489f608749fce9a0af4812468a1f99388ab4f30929 WatchSource:0}: Error finding container e9dafe9a7a14455f6d6567489f608749fce9a0af4812468a1f99388ab4f30929: Status 404 returned error can't find the container with id e9dafe9a7a14455f6d6567489f608749fce9a0af4812468a1f99388ab4f30929 Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.533028 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.595963 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f4657cb95-4tfvc" event={"ID":"8f83894a-73ec-405a-bdd2-2044b3f9140a","Type":"ContainerStarted","Data":"c6e79ae953c0f36ce267680773fd96b75453c6a1745545d5e448c84519c6cdae"} Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.604025 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-85cc5d579d-jhqqd" event={"ID":"054e527c-8ce1-4d03-8fef-0430934daba3","Type":"ContainerStarted","Data":"e62654d928fcfc926c64b76e5a652ed2c3fb2b029b9bd38eabebb8f8d2e377c1"} Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.606022 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q" event={"ID":"9ac97bdb-475a-4061-96b0-1423be10bb5b","Type":"ContainerStarted","Data":"1724f7bc6805ebdf2ea8515900b97a42430de51ca57fd28deec62f818f0909c2"} Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.607754 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55f7ff7dd6-jj4jw" event={"ID":"ea36feff-2438-49e4-b779-0b083addd0a8","Type":"ContainerStarted","Data":"e9dafe9a7a14455f6d6567489f608749fce9a0af4812468a1f99388ab4f30929"} Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.609943 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87986c31-37d7-4624-87a2-b5678e01d865-config-data\") pod \"87986c31-37d7-4624-87a2-b5678e01d865\" (UID: \"87986c31-37d7-4624-87a2-b5678e01d865\") " Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.610085 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87986c31-37d7-4624-87a2-b5678e01d865-scripts\") pod \"87986c31-37d7-4624-87a2-b5678e01d865\" (UID: \"87986c31-37d7-4624-87a2-b5678e01d865\") " Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.610143 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87986c31-37d7-4624-87a2-b5678e01d865-sg-core-conf-yaml\") pod \"87986c31-37d7-4624-87a2-b5678e01d865\" (UID: \"87986c31-37d7-4624-87a2-b5678e01d865\") " Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.610203 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87986c31-37d7-4624-87a2-b5678e01d865-run-httpd\") pod \"87986c31-37d7-4624-87a2-b5678e01d865\" (UID: \"87986c31-37d7-4624-87a2-b5678e01d865\") " Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.610284 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87986c31-37d7-4624-87a2-b5678e01d865-log-httpd\") pod \"87986c31-37d7-4624-87a2-b5678e01d865\" (UID: \"87986c31-37d7-4624-87a2-b5678e01d865\") " Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.610399 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4459b\" (UniqueName: \"kubernetes.io/projected/87986c31-37d7-4624-87a2-b5678e01d865-kube-api-access-4459b\") pod \"87986c31-37d7-4624-87a2-b5678e01d865\" (UID: \"87986c31-37d7-4624-87a2-b5678e01d865\") " Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.610421 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87986c31-37d7-4624-87a2-b5678e01d865-combined-ca-bundle\") pod \"87986c31-37d7-4624-87a2-b5678e01d865\" (UID: \"87986c31-37d7-4624-87a2-b5678e01d865\") " Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.613271 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87986c31-37d7-4624-87a2-b5678e01d865-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "87986c31-37d7-4624-87a2-b5678e01d865" (UID: "87986c31-37d7-4624-87a2-b5678e01d865"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.613503 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87986c31-37d7-4624-87a2-b5678e01d865-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "87986c31-37d7-4624-87a2-b5678e01d865" (UID: "87986c31-37d7-4624-87a2-b5678e01d865"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.614039 4886 generic.go:334] "Generic (PLEG): container finished" podID="87986c31-37d7-4624-87a2-b5678e01d865" containerID="fc4b86cf717b23c7c04aaa4106c7da0d6d9a36f8580e8da13099630ec38cb927" exitCode=0 Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.614949 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.632534 4886 generic.go:334] "Generic (PLEG): container finished" podID="a0058f32-ae80-4dde-9dce-095c62f45979" containerID="ab83d2d0c36aaea48832e86668e20e1d6f6f876644014c27f52bee83b6960b7d" exitCode=0 Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.644082 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87986c31-37d7-4624-87a2-b5678e01d865-scripts" (OuterVolumeSpecName: "scripts") pod "87986c31-37d7-4624-87a2-b5678e01d865" (UID: "87986c31-37d7-4624-87a2-b5678e01d865"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.644146 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87986c31-37d7-4624-87a2-b5678e01d865-kube-api-access-4459b" (OuterVolumeSpecName: "kube-api-access-4459b") pod "87986c31-37d7-4624-87a2-b5678e01d865" (UID: "87986c31-37d7-4624-87a2-b5678e01d865"). InnerVolumeSpecName "kube-api-access-4459b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.668617 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87986c31-37d7-4624-87a2-b5678e01d865","Type":"ContainerDied","Data":"fc4b86cf717b23c7c04aaa4106c7da0d6d9a36f8580e8da13099630ec38cb927"} Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.668739 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87986c31-37d7-4624-87a2-b5678e01d865","Type":"ContainerDied","Data":"3e6ce925c7e7561fcefff1c9869e186415899419d2d1d24db82a0097aea34d23"} Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.668755 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-6nmwn" event={"ID":"a0058f32-ae80-4dde-9dce-095c62f45979","Type":"ContainerDied","Data":"ab83d2d0c36aaea48832e86668e20e1d6f6f876644014c27f52bee83b6960b7d"} Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.668779 4886 scope.go:117] "RemoveContainer" containerID="6996141f6a6ddf86f1830cd32cfa7315a6d22f9c619ba74af481f02099316d55" Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.689846 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87986c31-37d7-4624-87a2-b5678e01d865-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "87986c31-37d7-4624-87a2-b5678e01d865" (UID: "87986c31-37d7-4624-87a2-b5678e01d865"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.712694 4886 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87986c31-37d7-4624-87a2-b5678e01d865-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.712718 4886 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87986c31-37d7-4624-87a2-b5678e01d865-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.712728 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4459b\" (UniqueName: \"kubernetes.io/projected/87986c31-37d7-4624-87a2-b5678e01d865-kube-api-access-4459b\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.712737 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87986c31-37d7-4624-87a2-b5678e01d865-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.712745 4886 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87986c31-37d7-4624-87a2-b5678e01d865-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.751163 4886 scope.go:117] "RemoveContainer" containerID="2af8246b154ee39fedcfdd8e1579a14d1154c4bc23cb6682bb1d0354640c6bcf" Jan 29 17:07:36 crc kubenswrapper[4886]: E0129 17:07:36.764812 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87986c31-37d7-4624-87a2-b5678e01d865-combined-ca-bundle podName:87986c31-37d7-4624-87a2-b5678e01d865 nodeName:}" failed. No retries permitted until 2026-01-29 17:07:37.264786205 +0000 UTC m=+2740.173505477 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/87986c31-37d7-4624-87a2-b5678e01d865-combined-ca-bundle") pod "87986c31-37d7-4624-87a2-b5678e01d865" (UID: "87986c31-37d7-4624-87a2-b5678e01d865") : error deleting /var/lib/kubelet/pods/87986c31-37d7-4624-87a2-b5678e01d865/volume-subpaths: remove /var/lib/kubelet/pods/87986c31-37d7-4624-87a2-b5678e01d865/volume-subpaths: no such file or directory Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.767794 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87986c31-37d7-4624-87a2-b5678e01d865-config-data" (OuterVolumeSpecName: "config-data") pod "87986c31-37d7-4624-87a2-b5678e01d865" (UID: "87986c31-37d7-4624-87a2-b5678e01d865"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.773559 4886 scope.go:117] "RemoveContainer" containerID="fc4b86cf717b23c7c04aaa4106c7da0d6d9a36f8580e8da13099630ec38cb927" Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.801877 4886 scope.go:117] "RemoveContainer" containerID="6528db29d7d5821f74fc120a90a127f94065eb87d3cb30310e3e2849cde918e4" Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.815737 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87986c31-37d7-4624-87a2-b5678e01d865-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.828260 4886 scope.go:117] "RemoveContainer" containerID="6996141f6a6ddf86f1830cd32cfa7315a6d22f9c619ba74af481f02099316d55" Jan 29 17:07:36 crc kubenswrapper[4886]: E0129 17:07:36.829526 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6996141f6a6ddf86f1830cd32cfa7315a6d22f9c619ba74af481f02099316d55\": container with ID starting with 6996141f6a6ddf86f1830cd32cfa7315a6d22f9c619ba74af481f02099316d55 not found: ID does not exist" containerID="6996141f6a6ddf86f1830cd32cfa7315a6d22f9c619ba74af481f02099316d55" Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.829590 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6996141f6a6ddf86f1830cd32cfa7315a6d22f9c619ba74af481f02099316d55"} err="failed to get container status \"6996141f6a6ddf86f1830cd32cfa7315a6d22f9c619ba74af481f02099316d55\": rpc error: code = NotFound desc = could not find container \"6996141f6a6ddf86f1830cd32cfa7315a6d22f9c619ba74af481f02099316d55\": container with ID starting with 6996141f6a6ddf86f1830cd32cfa7315a6d22f9c619ba74af481f02099316d55 not found: ID does not exist" Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.829655 4886 scope.go:117] "RemoveContainer" containerID="2af8246b154ee39fedcfdd8e1579a14d1154c4bc23cb6682bb1d0354640c6bcf" Jan 29 17:07:36 crc kubenswrapper[4886]: E0129 17:07:36.830044 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2af8246b154ee39fedcfdd8e1579a14d1154c4bc23cb6682bb1d0354640c6bcf\": container with ID starting with 2af8246b154ee39fedcfdd8e1579a14d1154c4bc23cb6682bb1d0354640c6bcf not found: ID does not exist" containerID="2af8246b154ee39fedcfdd8e1579a14d1154c4bc23cb6682bb1d0354640c6bcf" Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.830071 4886 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2af8246b154ee39fedcfdd8e1579a14d1154c4bc23cb6682bb1d0354640c6bcf"} err="failed to get container status \"2af8246b154ee39fedcfdd8e1579a14d1154c4bc23cb6682bb1d0354640c6bcf\": rpc error: code = NotFound desc = could not find container \"2af8246b154ee39fedcfdd8e1579a14d1154c4bc23cb6682bb1d0354640c6bcf\": container with ID starting with 2af8246b154ee39fedcfdd8e1579a14d1154c4bc23cb6682bb1d0354640c6bcf not found: ID does not exist" Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.830093 4886 scope.go:117] "RemoveContainer" containerID="fc4b86cf717b23c7c04aaa4106c7da0d6d9a36f8580e8da13099630ec38cb927" Jan 29 17:07:36 crc kubenswrapper[4886]: E0129 17:07:36.833312 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc4b86cf717b23c7c04aaa4106c7da0d6d9a36f8580e8da13099630ec38cb927\": container with ID starting with fc4b86cf717b23c7c04aaa4106c7da0d6d9a36f8580e8da13099630ec38cb927 not found: ID does not exist" containerID="fc4b86cf717b23c7c04aaa4106c7da0d6d9a36f8580e8da13099630ec38cb927" Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.833385 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc4b86cf717b23c7c04aaa4106c7da0d6d9a36f8580e8da13099630ec38cb927"} err="failed to get container status \"fc4b86cf717b23c7c04aaa4106c7da0d6d9a36f8580e8da13099630ec38cb927\": rpc error: code = NotFound desc = could not find container \"fc4b86cf717b23c7c04aaa4106c7da0d6d9a36f8580e8da13099630ec38cb927\": container with ID starting with fc4b86cf717b23c7c04aaa4106c7da0d6d9a36f8580e8da13099630ec38cb927 not found: ID does not exist" Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.833416 4886 scope.go:117] "RemoveContainer" containerID="6528db29d7d5821f74fc120a90a127f94065eb87d3cb30310e3e2849cde918e4" Jan 29 17:07:36 crc kubenswrapper[4886]: E0129 17:07:36.835885 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6528db29d7d5821f74fc120a90a127f94065eb87d3cb30310e3e2849cde918e4\": container with ID starting with 6528db29d7d5821f74fc120a90a127f94065eb87d3cb30310e3e2849cde918e4 not found: ID does not exist" containerID="6528db29d7d5821f74fc120a90a127f94065eb87d3cb30310e3e2849cde918e4" Jan 29 17:07:36 crc kubenswrapper[4886]: I0129 17:07:36.835928 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6528db29d7d5821f74fc120a90a127f94065eb87d3cb30310e3e2849cde918e4"} err="failed to get container status \"6528db29d7d5821f74fc120a90a127f94065eb87d3cb30310e3e2849cde918e4\": rpc error: code = NotFound desc = could not find container \"6528db29d7d5821f74fc120a90a127f94065eb87d3cb30310e3e2849cde918e4\": container with ID starting with 6528db29d7d5821f74fc120a90a127f94065eb87d3cb30310e3e2849cde918e4 not found: ID does not exist" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.324927 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87986c31-37d7-4624-87a2-b5678e01d865-combined-ca-bundle\") pod \"87986c31-37d7-4624-87a2-b5678e01d865\" (UID: \"87986c31-37d7-4624-87a2-b5678e01d865\") " Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.344534 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87986c31-37d7-4624-87a2-b5678e01d865-combined-ca-bundle" (OuterVolumeSpecName: 
"combined-ca-bundle") pod "87986c31-37d7-4624-87a2-b5678e01d865" (UID: "87986c31-37d7-4624-87a2-b5678e01d865"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.427862 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87986c31-37d7-4624-87a2-b5678e01d865-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.557773 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.582930 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.594399 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:07:37 crc kubenswrapper[4886]: E0129 17:07:37.594845 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87986c31-37d7-4624-87a2-b5678e01d865" containerName="proxy-httpd" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.594862 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="87986c31-37d7-4624-87a2-b5678e01d865" containerName="proxy-httpd" Jan 29 17:07:37 crc kubenswrapper[4886]: E0129 17:07:37.594891 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87986c31-37d7-4624-87a2-b5678e01d865" containerName="sg-core" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.594898 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="87986c31-37d7-4624-87a2-b5678e01d865" containerName="sg-core" Jan 29 17:07:37 crc kubenswrapper[4886]: E0129 17:07:37.594910 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87986c31-37d7-4624-87a2-b5678e01d865" containerName="ceilometer-notification-agent" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.594920 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="87986c31-37d7-4624-87a2-b5678e01d865" containerName="ceilometer-notification-agent" Jan 29 17:07:37 crc kubenswrapper[4886]: E0129 17:07:37.594936 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87986c31-37d7-4624-87a2-b5678e01d865" containerName="ceilometer-central-agent" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.594942 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="87986c31-37d7-4624-87a2-b5678e01d865" containerName="ceilometer-central-agent" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.595187 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="87986c31-37d7-4624-87a2-b5678e01d865" containerName="proxy-httpd" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.595207 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="87986c31-37d7-4624-87a2-b5678e01d865" containerName="sg-core" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.595222 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="87986c31-37d7-4624-87a2-b5678e01d865" containerName="ceilometer-notification-agent" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.595235 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="87986c31-37d7-4624-87a2-b5678e01d865" containerName="ceilometer-central-agent" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.597790 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.604510 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.604531 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.615491 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.648316 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24e9fd03-4a7f-45c7-83e6-608ad7648766-log-httpd\") pod \"ceilometer-0\" (UID: \"24e9fd03-4a7f-45c7-83e6-608ad7648766\") " pod="openstack/ceilometer-0" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.648417 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24e9fd03-4a7f-45c7-83e6-608ad7648766-run-httpd\") pod \"ceilometer-0\" (UID: \"24e9fd03-4a7f-45c7-83e6-608ad7648766\") " pod="openstack/ceilometer-0" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.648462 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24e9fd03-4a7f-45c7-83e6-608ad7648766-scripts\") pod \"ceilometer-0\" (UID: \"24e9fd03-4a7f-45c7-83e6-608ad7648766\") " pod="openstack/ceilometer-0" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.648490 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24e9fd03-4a7f-45c7-83e6-608ad7648766-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"24e9fd03-4a7f-45c7-83e6-608ad7648766\") " pod="openstack/ceilometer-0" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.648511 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24e9fd03-4a7f-45c7-83e6-608ad7648766-config-data\") pod \"ceilometer-0\" (UID: \"24e9fd03-4a7f-45c7-83e6-608ad7648766\") " pod="openstack/ceilometer-0" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.648531 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/24e9fd03-4a7f-45c7-83e6-608ad7648766-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"24e9fd03-4a7f-45c7-83e6-608ad7648766\") " pod="openstack/ceilometer-0" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.648560 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kkf6\" (UniqueName: \"kubernetes.io/projected/24e9fd03-4a7f-45c7-83e6-608ad7648766-kube-api-access-5kkf6\") pod \"ceilometer-0\" (UID: \"24e9fd03-4a7f-45c7-83e6-608ad7648766\") " pod="openstack/ceilometer-0" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.673045 4886 generic.go:334] "Generic (PLEG): container finished" podID="9ac97bdb-475a-4061-96b0-1423be10bb5b" containerID="d6011c232b01e3892826684cea65e05a2b5a15c43a2d859d545b9c20ac294a14" exitCode=0 Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.673098 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q" event={"ID":"9ac97bdb-475a-4061-96b0-1423be10bb5b","Type":"ContainerDied","Data":"d6011c232b01e3892826684cea65e05a2b5a15c43a2d859d545b9c20ac294a14"} Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.674507 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55f7ff7dd6-jj4jw" event={"ID":"ea36feff-2438-49e4-b779-0b083addd0a8","Type":"ContainerStarted","Data":"8bc4314631c2d889fe7693108f39c4873628c917868bfba6190057b2b09695e2"} Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.674545 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55f7ff7dd6-jj4jw" event={"ID":"ea36feff-2438-49e4-b779-0b083addd0a8","Type":"ContainerStarted","Data":"f23c7cc8a8209a15c4be1f866071e7d19219ea178dc6b2496da6cf2510dacfc5"} Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.674601 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-55f7ff7dd6-jj4jw" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.674626 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-55f7ff7dd6-jj4jw" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.721783 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-55f7ff7dd6-jj4jw" podStartSLOduration=2.721760349 podStartE2EDuration="2.721760349s" podCreationTimestamp="2026-01-29 17:07:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:07:37.714531746 +0000 UTC m=+2740.623251038" watchObservedRunningTime="2026-01-29 17:07:37.721760349 +0000 UTC m=+2740.630479621" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.750481 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24e9fd03-4a7f-45c7-83e6-608ad7648766-scripts\") pod \"ceilometer-0\" (UID: \"24e9fd03-4a7f-45c7-83e6-608ad7648766\") " pod="openstack/ceilometer-0" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.750579 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24e9fd03-4a7f-45c7-83e6-608ad7648766-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"24e9fd03-4a7f-45c7-83e6-608ad7648766\") " pod="openstack/ceilometer-0" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.751542 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24e9fd03-4a7f-45c7-83e6-608ad7648766-config-data\") pod \"ceilometer-0\" (UID: \"24e9fd03-4a7f-45c7-83e6-608ad7648766\") " pod="openstack/ceilometer-0" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.751574 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/24e9fd03-4a7f-45c7-83e6-608ad7648766-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"24e9fd03-4a7f-45c7-83e6-608ad7648766\") " pod="openstack/ceilometer-0" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.751614 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kkf6\" (UniqueName: \"kubernetes.io/projected/24e9fd03-4a7f-45c7-83e6-608ad7648766-kube-api-access-5kkf6\") pod \"ceilometer-0\" (UID: \"24e9fd03-4a7f-45c7-83e6-608ad7648766\") " pod="openstack/ceilometer-0" Jan 29 17:07:37 crc 
kubenswrapper[4886]: I0129 17:07:37.751774 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24e9fd03-4a7f-45c7-83e6-608ad7648766-log-httpd\") pod \"ceilometer-0\" (UID: \"24e9fd03-4a7f-45c7-83e6-608ad7648766\") " pod="openstack/ceilometer-0" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.751857 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24e9fd03-4a7f-45c7-83e6-608ad7648766-run-httpd\") pod \"ceilometer-0\" (UID: \"24e9fd03-4a7f-45c7-83e6-608ad7648766\") " pod="openstack/ceilometer-0" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.752280 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24e9fd03-4a7f-45c7-83e6-608ad7648766-run-httpd\") pod \"ceilometer-0\" (UID: \"24e9fd03-4a7f-45c7-83e6-608ad7648766\") " pod="openstack/ceilometer-0" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.753519 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24e9fd03-4a7f-45c7-83e6-608ad7648766-log-httpd\") pod \"ceilometer-0\" (UID: \"24e9fd03-4a7f-45c7-83e6-608ad7648766\") " pod="openstack/ceilometer-0" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.757095 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24e9fd03-4a7f-45c7-83e6-608ad7648766-config-data\") pod \"ceilometer-0\" (UID: \"24e9fd03-4a7f-45c7-83e6-608ad7648766\") " pod="openstack/ceilometer-0" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.760505 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/24e9fd03-4a7f-45c7-83e6-608ad7648766-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"24e9fd03-4a7f-45c7-83e6-608ad7648766\") " pod="openstack/ceilometer-0" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.768570 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24e9fd03-4a7f-45c7-83e6-608ad7648766-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"24e9fd03-4a7f-45c7-83e6-608ad7648766\") " pod="openstack/ceilometer-0" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.769058 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kkf6\" (UniqueName: \"kubernetes.io/projected/24e9fd03-4a7f-45c7-83e6-608ad7648766-kube-api-access-5kkf6\") pod \"ceilometer-0\" (UID: \"24e9fd03-4a7f-45c7-83e6-608ad7648766\") " pod="openstack/ceilometer-0" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.784212 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24e9fd03-4a7f-45c7-83e6-608ad7648766-scripts\") pod \"ceilometer-0\" (UID: \"24e9fd03-4a7f-45c7-83e6-608ad7648766\") " pod="openstack/ceilometer-0" Jan 29 17:07:37 crc kubenswrapper[4886]: I0129 17:07:37.951125 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.036681 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5fb894ff6d-w7s26"] Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.038790 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5fb894ff6d-w7s26" Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.050926 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.051155 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.054851 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5fb894ff6d-w7s26"] Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.161160 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b87936a5-19e1-4a58-948f-1f569c08bb6b-internal-tls-certs\") pod \"barbican-api-5fb894ff6d-w7s26\" (UID: \"b87936a5-19e1-4a58-948f-1f569c08bb6b\") " pod="openstack/barbican-api-5fb894ff6d-w7s26" Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.161227 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b87936a5-19e1-4a58-948f-1f569c08bb6b-logs\") pod \"barbican-api-5fb894ff6d-w7s26\" (UID: \"b87936a5-19e1-4a58-948f-1f569c08bb6b\") " pod="openstack/barbican-api-5fb894ff6d-w7s26" Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.161432 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b87936a5-19e1-4a58-948f-1f569c08bb6b-public-tls-certs\") pod \"barbican-api-5fb894ff6d-w7s26\" (UID: \"b87936a5-19e1-4a58-948f-1f569c08bb6b\") " pod="openstack/barbican-api-5fb894ff6d-w7s26" Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.161545 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw87g\" (UniqueName: \"kubernetes.io/projected/b87936a5-19e1-4a58-948f-1f569c08bb6b-kube-api-access-fw87g\") pod \"barbican-api-5fb894ff6d-w7s26\" (UID: \"b87936a5-19e1-4a58-948f-1f569c08bb6b\") " pod="openstack/barbican-api-5fb894ff6d-w7s26" Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.161567 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b87936a5-19e1-4a58-948f-1f569c08bb6b-config-data-custom\") pod \"barbican-api-5fb894ff6d-w7s26\" (UID: \"b87936a5-19e1-4a58-948f-1f569c08bb6b\") " pod="openstack/barbican-api-5fb894ff6d-w7s26" Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.161750 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b87936a5-19e1-4a58-948f-1f569c08bb6b-combined-ca-bundle\") pod \"barbican-api-5fb894ff6d-w7s26\" (UID: \"b87936a5-19e1-4a58-948f-1f569c08bb6b\") " pod="openstack/barbican-api-5fb894ff6d-w7s26" Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.161880 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b87936a5-19e1-4a58-948f-1f569c08bb6b-config-data\") pod \"barbican-api-5fb894ff6d-w7s26\" (UID: \"b87936a5-19e1-4a58-948f-1f569c08bb6b\") " pod="openstack/barbican-api-5fb894ff6d-w7s26" Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.217964 4886 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/heat-db-sync-6nmwn" Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.263528 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0058f32-ae80-4dde-9dce-095c62f45979-combined-ca-bundle\") pod \"a0058f32-ae80-4dde-9dce-095c62f45979\" (UID: \"a0058f32-ae80-4dde-9dce-095c62f45979\") " Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.263582 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0058f32-ae80-4dde-9dce-095c62f45979-config-data\") pod \"a0058f32-ae80-4dde-9dce-095c62f45979\" (UID: \"a0058f32-ae80-4dde-9dce-095c62f45979\") " Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.263645 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v7hl\" (UniqueName: \"kubernetes.io/projected/a0058f32-ae80-4dde-9dce-095c62f45979-kube-api-access-9v7hl\") pod \"a0058f32-ae80-4dde-9dce-095c62f45979\" (UID: \"a0058f32-ae80-4dde-9dce-095c62f45979\") " Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.264044 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b87936a5-19e1-4a58-948f-1f569c08bb6b-config-data\") pod \"barbican-api-5fb894ff6d-w7s26\" (UID: \"b87936a5-19e1-4a58-948f-1f569c08bb6b\") " pod="openstack/barbican-api-5fb894ff6d-w7s26" Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.264105 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b87936a5-19e1-4a58-948f-1f569c08bb6b-internal-tls-certs\") pod \"barbican-api-5fb894ff6d-w7s26\" (UID: \"b87936a5-19e1-4a58-948f-1f569c08bb6b\") " pod="openstack/barbican-api-5fb894ff6d-w7s26" Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.264141 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b87936a5-19e1-4a58-948f-1f569c08bb6b-logs\") pod \"barbican-api-5fb894ff6d-w7s26\" (UID: \"b87936a5-19e1-4a58-948f-1f569c08bb6b\") " pod="openstack/barbican-api-5fb894ff6d-w7s26" Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.264189 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b87936a5-19e1-4a58-948f-1f569c08bb6b-public-tls-certs\") pod \"barbican-api-5fb894ff6d-w7s26\" (UID: \"b87936a5-19e1-4a58-948f-1f569c08bb6b\") " pod="openstack/barbican-api-5fb894ff6d-w7s26" Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.264239 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw87g\" (UniqueName: \"kubernetes.io/projected/b87936a5-19e1-4a58-948f-1f569c08bb6b-kube-api-access-fw87g\") pod \"barbican-api-5fb894ff6d-w7s26\" (UID: \"b87936a5-19e1-4a58-948f-1f569c08bb6b\") " pod="openstack/barbican-api-5fb894ff6d-w7s26" Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.264255 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b87936a5-19e1-4a58-948f-1f569c08bb6b-config-data-custom\") pod \"barbican-api-5fb894ff6d-w7s26\" (UID: \"b87936a5-19e1-4a58-948f-1f569c08bb6b\") " pod="openstack/barbican-api-5fb894ff6d-w7s26" Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.264377 
4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b87936a5-19e1-4a58-948f-1f569c08bb6b-combined-ca-bundle\") pod \"barbican-api-5fb894ff6d-w7s26\" (UID: \"b87936a5-19e1-4a58-948f-1f569c08bb6b\") " pod="openstack/barbican-api-5fb894ff6d-w7s26"
Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.265041 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b87936a5-19e1-4a58-948f-1f569c08bb6b-logs\") pod \"barbican-api-5fb894ff6d-w7s26\" (UID: \"b87936a5-19e1-4a58-948f-1f569c08bb6b\") " pod="openstack/barbican-api-5fb894ff6d-w7s26"
Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.269517 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b87936a5-19e1-4a58-948f-1f569c08bb6b-internal-tls-certs\") pod \"barbican-api-5fb894ff6d-w7s26\" (UID: \"b87936a5-19e1-4a58-948f-1f569c08bb6b\") " pod="openstack/barbican-api-5fb894ff6d-w7s26"
Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.270887 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b87936a5-19e1-4a58-948f-1f569c08bb6b-combined-ca-bundle\") pod \"barbican-api-5fb894ff6d-w7s26\" (UID: \"b87936a5-19e1-4a58-948f-1f569c08bb6b\") " pod="openstack/barbican-api-5fb894ff6d-w7s26"
Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.271435 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b87936a5-19e1-4a58-948f-1f569c08bb6b-config-data-custom\") pod \"barbican-api-5fb894ff6d-w7s26\" (UID: \"b87936a5-19e1-4a58-948f-1f569c08bb6b\") " pod="openstack/barbican-api-5fb894ff6d-w7s26"
Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.272724 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b87936a5-19e1-4a58-948f-1f569c08bb6b-config-data\") pod \"barbican-api-5fb894ff6d-w7s26\" (UID: \"b87936a5-19e1-4a58-948f-1f569c08bb6b\") " pod="openstack/barbican-api-5fb894ff6d-w7s26"
Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.274815 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b87936a5-19e1-4a58-948f-1f569c08bb6b-public-tls-certs\") pod \"barbican-api-5fb894ff6d-w7s26\" (UID: \"b87936a5-19e1-4a58-948f-1f569c08bb6b\") " pod="openstack/barbican-api-5fb894ff6d-w7s26"
Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.275556 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0058f32-ae80-4dde-9dce-095c62f45979-kube-api-access-9v7hl" (OuterVolumeSpecName: "kube-api-access-9v7hl") pod "a0058f32-ae80-4dde-9dce-095c62f45979" (UID: "a0058f32-ae80-4dde-9dce-095c62f45979"). InnerVolumeSpecName "kube-api-access-9v7hl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.288105 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw87g\" (UniqueName: \"kubernetes.io/projected/b87936a5-19e1-4a58-948f-1f569c08bb6b-kube-api-access-fw87g\") pod \"barbican-api-5fb894ff6d-w7s26\" (UID: \"b87936a5-19e1-4a58-948f-1f569c08bb6b\") " pod="openstack/barbican-api-5fb894ff6d-w7s26"
Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.306164 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0058f32-ae80-4dde-9dce-095c62f45979-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0058f32-ae80-4dde-9dce-095c62f45979" (UID: "a0058f32-ae80-4dde-9dce-095c62f45979"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.367990 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0058f32-ae80-4dde-9dce-095c62f45979-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.368021 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v7hl\" (UniqueName: \"kubernetes.io/projected/a0058f32-ae80-4dde-9dce-095c62f45979-kube-api-access-9v7hl\") on node \"crc\" DevicePath \"\""
Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.375239 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0058f32-ae80-4dde-9dce-095c62f45979-config-data" (OuterVolumeSpecName: "config-data") pod "a0058f32-ae80-4dde-9dce-095c62f45979" (UID: "a0058f32-ae80-4dde-9dce-095c62f45979"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.470845 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0058f32-ae80-4dde-9dce-095c62f45979-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.504568 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5fb894ff6d-w7s26"
Jan 29 17:07:38 crc kubenswrapper[4886]: W0129 17:07:38.550503 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24e9fd03_4a7f_45c7_83e6_608ad7648766.slice/crio-92751cfdf549c65a3a37a865694b9ce91879a5f41c663c775080337b3acc7481 WatchSource:0}: Error finding container 92751cfdf549c65a3a37a865694b9ce91879a5f41c663c775080337b3acc7481: Status 404 returned error can't find the container with id 92751cfdf549c65a3a37a865694b9ce91879a5f41c663c775080337b3acc7481
Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.555076 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.644019 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87986c31-37d7-4624-87a2-b5678e01d865" path="/var/lib/kubelet/pods/87986c31-37d7-4624-87a2-b5678e01d865/volumes"
Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.694044 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-6nmwn" event={"ID":"a0058f32-ae80-4dde-9dce-095c62f45979","Type":"ContainerDied","Data":"d9df74376035a2b4e196d856e8d76469a75a91514ac671f314bd4926926ee2e3"}
Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.694091 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9df74376035a2b4e196d856e8d76469a75a91514ac671f314bd4926926ee2e3"
Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.694058 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-6nmwn"
Jan 29 17:07:38 crc kubenswrapper[4886]: I0129 17:07:38.696802 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24e9fd03-4a7f-45c7-83e6-608ad7648766","Type":"ContainerStarted","Data":"92751cfdf549c65a3a37a865694b9ce91879a5f41c663c775080337b3acc7481"}
Jan 29 17:07:39 crc kubenswrapper[4886]: I0129 17:07:39.030931 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5fb894ff6d-w7s26"]
Jan 29 17:07:39 crc kubenswrapper[4886]: I0129 17:07:39.708446 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q" event={"ID":"9ac97bdb-475a-4061-96b0-1423be10bb5b","Type":"ContainerStarted","Data":"a528683376327e5804a4ea1ec553e70518415fe775e3feb358ab1099f935a1fb"}
Jan 29 17:07:39 crc kubenswrapper[4886]: I0129 17:07:39.708796 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q"
Jan 29 17:07:39 crc kubenswrapper[4886]: I0129 17:07:39.734773 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q" podStartSLOduration=5.734753896 podStartE2EDuration="5.734753896s" podCreationTimestamp="2026-01-29 17:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:07:39.725469865 +0000 UTC m=+2742.634189137" watchObservedRunningTime="2026-01-29 17:07:39.734753896 +0000 UTC m=+2742.643473168"
Jan 29 17:07:39 crc kubenswrapper[4886]: W0129 17:07:39.848455 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb87936a5_19e1_4a58_948f_1f569c08bb6b.slice/crio-5670a05f4acee95bf1f3b5e9db23d52bf751ed4a054baf63e3e9aace49a37d13 WatchSource:0}: Error finding container 5670a05f4acee95bf1f3b5e9db23d52bf751ed4a054baf63e3e9aace49a37d13: Status 404 returned error can't find the container with id 5670a05f4acee95bf1f3b5e9db23d52bf751ed4a054baf63e3e9aace49a37d13
Jan 29 17:07:40 crc kubenswrapper[4886]: I0129 17:07:40.719799 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fb894ff6d-w7s26" event={"ID":"b87936a5-19e1-4a58-948f-1f569c08bb6b","Type":"ContainerStarted","Data":"5670a05f4acee95bf1f3b5e9db23d52bf751ed4a054baf63e3e9aace49a37d13"}
Jan 29 17:07:41 crc kubenswrapper[4886]: I0129 17:07:41.732494 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fb894ff6d-w7s26" event={"ID":"b87936a5-19e1-4a58-948f-1f569c08bb6b","Type":"ContainerStarted","Data":"75e585ba6e31872c391d4f021d333f4dc8414bf7f94dc2577e762cfee1d307f3"}
Jan 29 17:07:41 crc kubenswrapper[4886]: I0129 17:07:41.734964 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24e9fd03-4a7f-45c7-83e6-608ad7648766","Type":"ContainerStarted","Data":"472df94bcf2c9160f704fb8f0e7681c07c27ea44d994460b0bfef6434e9a5bfa"}
Jan 29 17:07:41 crc kubenswrapper[4886]: I0129 17:07:41.736584 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f4657cb95-4tfvc" event={"ID":"8f83894a-73ec-405a-bdd2-2044b3f9140a","Type":"ContainerStarted","Data":"4f672b9ba40814a9dd3c3a838059715e007cf5e911ed8e940e56c86de2273636"}
Jan 29 17:07:42 crc kubenswrapper[4886]: I0129 17:07:42.752470 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fb894ff6d-w7s26" event={"ID":"b87936a5-19e1-4a58-948f-1f569c08bb6b","Type":"ContainerStarted","Data":"361c448a89a664088cb620037ad2edbb0b1c2b53501090897700deca3cf05ec1"}
Jan 29 17:07:42 crc kubenswrapper[4886]: I0129 17:07:42.753088 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5fb894ff6d-w7s26"
Jan 29 17:07:42 crc kubenswrapper[4886]: I0129 17:07:42.753127 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5fb894ff6d-w7s26"
Jan 29 17:07:42 crc kubenswrapper[4886]: I0129 17:07:42.776597 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24e9fd03-4a7f-45c7-83e6-608ad7648766","Type":"ContainerStarted","Data":"1bdf46565ca1048aaf33d2e55676cc44132df701332d9cac871024cf7e0601b1"}
Jan 29 17:07:42 crc kubenswrapper[4886]: I0129 17:07:42.792039 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5fb894ff6d-w7s26" podStartSLOduration=5.792021396 podStartE2EDuration="5.792021396s" podCreationTimestamp="2026-01-29 17:07:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:07:42.786679735 +0000 UTC m=+2745.695399007" watchObservedRunningTime="2026-01-29 17:07:42.792021396 +0000 UTC m=+2745.700740668"
Jan 29 17:07:42 crc kubenswrapper[4886]: I0129 17:07:42.793392 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f4657cb95-4tfvc" event={"ID":"8f83894a-73ec-405a-bdd2-2044b3f9140a","Type":"ContainerStarted","Data":"a134eb869b542799f5b8ee4915f6e2f42dae3c4d8dc9c506e22973bc89774628"}
Jan 29 17:07:42 crc kubenswrapper[4886]: I0129 17:07:42.800456 4886 generic.go:334] "Generic (PLEG): container finished" podID="04dae116-ceca-4588-9cba-1266bfa92caf" containerID="09a30c5dfcb3deacf09e3ccec1c515a8213db072a4cbe06ac44ba60b9a7d0159" exitCode=0
Jan 29 17:07:42 crc kubenswrapper[4886]: I0129 17:07:42.800531 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-j5gfz" event={"ID":"04dae116-ceca-4588-9cba-1266bfa92caf","Type":"ContainerDied","Data":"09a30c5dfcb3deacf09e3ccec1c515a8213db072a4cbe06ac44ba60b9a7d0159"}
Jan 29 17:07:42 crc kubenswrapper[4886]: I0129 17:07:42.815519 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-f4657cb95-4tfvc" podStartSLOduration=4.253467625 podStartE2EDuration="8.815499717s" podCreationTimestamp="2026-01-29 17:07:34 +0000 UTC" firstStartedPulling="2026-01-29 17:07:35.919972361 +0000 UTC m=+2738.828691623" lastFinishedPulling="2026-01-29 17:07:40.482004443 +0000 UTC m=+2743.390723715" observedRunningTime="2026-01-29 17:07:42.812077731 +0000 UTC m=+2745.720797003" watchObservedRunningTime="2026-01-29 17:07:42.815499717 +0000 UTC m=+2745.724218979"
Jan 29 17:07:42 crc kubenswrapper[4886]: I0129 17:07:42.822990 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-85cc5d579d-jhqqd" event={"ID":"054e527c-8ce1-4d03-8fef-0430934daba3","Type":"ContainerStarted","Data":"a8e7e2bd6b3cda1bfc8f6441f00f6807a7324ed4d8e27f36ee1ce8a6f9f49cfe"}
Jan 29 17:07:43 crc kubenswrapper[4886]: I0129 17:07:43.846771 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-85cc5d579d-jhqqd" event={"ID":"054e527c-8ce1-4d03-8fef-0430934daba3","Type":"ContainerStarted","Data":"39b79bf84cb88167d5c8bac93b91dc7b502f104f2e0d2c0fcc75c3fc93973f4e"}
Jan 29 17:07:43 crc kubenswrapper[4886]: I0129 17:07:43.873216 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-85cc5d579d-jhqqd" podStartSLOduration=3.697965619 podStartE2EDuration="9.873195628s" podCreationTimestamp="2026-01-29 17:07:34 +0000 UTC" firstStartedPulling="2026-01-29 17:07:36.042686707 +0000 UTC m=+2738.951405979" lastFinishedPulling="2026-01-29 17:07:42.217916706 +0000 UTC m=+2745.126635988" observedRunningTime="2026-01-29 17:07:43.865215353 +0000 UTC m=+2746.773934645" watchObservedRunningTime="2026-01-29 17:07:43.873195628 +0000 UTC m=+2746.781914910"
Jan 29 17:07:44 crc kubenswrapper[4886]: I0129 17:07:44.343283 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-j5gfz"
Jan 29 17:07:44 crc kubenswrapper[4886]: I0129 17:07:44.429293 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04dae116-ceca-4588-9cba-1266bfa92caf-config-data\") pod \"04dae116-ceca-4588-9cba-1266bfa92caf\" (UID: \"04dae116-ceca-4588-9cba-1266bfa92caf\") "
Jan 29 17:07:44 crc kubenswrapper[4886]: I0129 17:07:44.429461 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rkdq\" (UniqueName: \"kubernetes.io/projected/04dae116-ceca-4588-9cba-1266bfa92caf-kube-api-access-2rkdq\") pod \"04dae116-ceca-4588-9cba-1266bfa92caf\" (UID: \"04dae116-ceca-4588-9cba-1266bfa92caf\") "
Jan 29 17:07:44 crc kubenswrapper[4886]: I0129 17:07:44.429528 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04dae116-ceca-4588-9cba-1266bfa92caf-scripts\") pod \"04dae116-ceca-4588-9cba-1266bfa92caf\" (UID: \"04dae116-ceca-4588-9cba-1266bfa92caf\") "
Jan 29 17:07:44 crc kubenswrapper[4886]: I0129 17:07:44.429629 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/04dae116-ceca-4588-9cba-1266bfa92caf-db-sync-config-data\") pod \"04dae116-ceca-4588-9cba-1266bfa92caf\" (UID: \"04dae116-ceca-4588-9cba-1266bfa92caf\") "
Jan 29 17:07:44 crc kubenswrapper[4886]: I0129 17:07:44.429822 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04dae116-ceca-4588-9cba-1266bfa92caf-combined-ca-bundle\") pod \"04dae116-ceca-4588-9cba-1266bfa92caf\" (UID: \"04dae116-ceca-4588-9cba-1266bfa92caf\") "
Jan 29 17:07:44 crc kubenswrapper[4886]: I0129 17:07:44.429920 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04dae116-ceca-4588-9cba-1266bfa92caf-etc-machine-id\") pod \"04dae116-ceca-4588-9cba-1266bfa92caf\" (UID: \"04dae116-ceca-4588-9cba-1266bfa92caf\") "
Jan 29 17:07:44 crc kubenswrapper[4886]: I0129 17:07:44.430642 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04dae116-ceca-4588-9cba-1266bfa92caf-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "04dae116-ceca-4588-9cba-1266bfa92caf" (UID: "04dae116-ceca-4588-9cba-1266bfa92caf"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 17:07:44 crc kubenswrapper[4886]: I0129 17:07:44.435215 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04dae116-ceca-4588-9cba-1266bfa92caf-kube-api-access-2rkdq" (OuterVolumeSpecName: "kube-api-access-2rkdq") pod "04dae116-ceca-4588-9cba-1266bfa92caf" (UID: "04dae116-ceca-4588-9cba-1266bfa92caf"). InnerVolumeSpecName "kube-api-access-2rkdq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:07:44 crc kubenswrapper[4886]: I0129 17:07:44.435593 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04dae116-ceca-4588-9cba-1266bfa92caf-scripts" (OuterVolumeSpecName: "scripts") pod "04dae116-ceca-4588-9cba-1266bfa92caf" (UID: "04dae116-ceca-4588-9cba-1266bfa92caf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:44 crc kubenswrapper[4886]: I0129 17:07:44.439438 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04dae116-ceca-4588-9cba-1266bfa92caf-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "04dae116-ceca-4588-9cba-1266bfa92caf" (UID: "04dae116-ceca-4588-9cba-1266bfa92caf"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:44 crc kubenswrapper[4886]: I0129 17:07:44.472110 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04dae116-ceca-4588-9cba-1266bfa92caf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04dae116-ceca-4588-9cba-1266bfa92caf" (UID: "04dae116-ceca-4588-9cba-1266bfa92caf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:44 crc kubenswrapper[4886]: I0129 17:07:44.498579 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04dae116-ceca-4588-9cba-1266bfa92caf-config-data" (OuterVolumeSpecName: "config-data") pod "04dae116-ceca-4588-9cba-1266bfa92caf" (UID: "04dae116-ceca-4588-9cba-1266bfa92caf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:44 crc kubenswrapper[4886]: I0129 17:07:44.532478 4886 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04dae116-ceca-4588-9cba-1266bfa92caf-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:44 crc kubenswrapper[4886]: I0129 17:07:44.532523 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04dae116-ceca-4588-9cba-1266bfa92caf-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:44 crc kubenswrapper[4886]: I0129 17:07:44.532567 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rkdq\" (UniqueName: \"kubernetes.io/projected/04dae116-ceca-4588-9cba-1266bfa92caf-kube-api-access-2rkdq\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:44 crc kubenswrapper[4886]: I0129 17:07:44.532580 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04dae116-ceca-4588-9cba-1266bfa92caf-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:44 crc kubenswrapper[4886]: I0129 17:07:44.532592 4886 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/04dae116-ceca-4588-9cba-1266bfa92caf-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:44 crc kubenswrapper[4886]: I0129 17:07:44.532603 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04dae116-ceca-4588-9cba-1266bfa92caf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:44 crc kubenswrapper[4886]: I0129 17:07:44.861943 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24e9fd03-4a7f-45c7-83e6-608ad7648766","Type":"ContainerStarted","Data":"9d8e62602d1305f37f8a51b73f2c104ca86a67a3331fc3d826d42ccf0fac24ce"} Jan 29 17:07:44 crc kubenswrapper[4886]: I0129 17:07:44.866058 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-j5gfz" 
event={"ID":"04dae116-ceca-4588-9cba-1266bfa92caf","Type":"ContainerDied","Data":"3d72bfc601ef7f8aa44a162e8a49bc717daf618d327e886ac546527a7c3a7e17"} Jan 29 17:07:44 crc kubenswrapper[4886]: I0129 17:07:44.866091 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-j5gfz" Jan 29 17:07:44 crc kubenswrapper[4886]: I0129 17:07:44.866114 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d72bfc601ef7f8aa44a162e8a49bc717daf618d327e886ac546527a7c3a7e17" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.177036 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 17:07:45 crc kubenswrapper[4886]: E0129 17:07:45.177927 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04dae116-ceca-4588-9cba-1266bfa92caf" containerName="cinder-db-sync" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.177944 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="04dae116-ceca-4588-9cba-1266bfa92caf" containerName="cinder-db-sync" Jan 29 17:07:45 crc kubenswrapper[4886]: E0129 17:07:45.177985 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0058f32-ae80-4dde-9dce-095c62f45979" containerName="heat-db-sync" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.177993 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0058f32-ae80-4dde-9dce-095c62f45979" containerName="heat-db-sync" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.178241 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0058f32-ae80-4dde-9dce-095c62f45979" containerName="heat-db-sync" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.178259 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="04dae116-ceca-4588-9cba-1266bfa92caf" containerName="cinder-db-sync" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.179735 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.188970 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.189164 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-ldtkt" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.189263 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.189396 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.189668 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.260762 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79744cfd-ecdc-42c4-b70e-bb957640a11c-config-data\") pod \"cinder-scheduler-0\" (UID: \"79744cfd-ecdc-42c4-b70e-bb957640a11c\") " pod="openstack/cinder-scheduler-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.260811 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79744cfd-ecdc-42c4-b70e-bb957640a11c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"79744cfd-ecdc-42c4-b70e-bb957640a11c\") " pod="openstack/cinder-scheduler-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.260921 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79744cfd-ecdc-42c4-b70e-bb957640a11c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"79744cfd-ecdc-42c4-b70e-bb957640a11c\") " pod="openstack/cinder-scheduler-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.260940 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrzlm\" (UniqueName: \"kubernetes.io/projected/79744cfd-ecdc-42c4-b70e-bb957640a11c-kube-api-access-zrzlm\") pod \"cinder-scheduler-0\" (UID: \"79744cfd-ecdc-42c4-b70e-bb957640a11c\") " pod="openstack/cinder-scheduler-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.261011 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79744cfd-ecdc-42c4-b70e-bb957640a11c-scripts\") pod \"cinder-scheduler-0\" (UID: \"79744cfd-ecdc-42c4-b70e-bb957640a11c\") " pod="openstack/cinder-scheduler-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.261028 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79744cfd-ecdc-42c4-b70e-bb957640a11c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"79744cfd-ecdc-42c4-b70e-bb957640a11c\") " pod="openstack/cinder-scheduler-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.332760 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-jsg5q"] Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.332980 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q" 
podUID="9ac97bdb-475a-4061-96b0-1423be10bb5b" containerName="dnsmasq-dns" containerID="cri-o://a528683376327e5804a4ea1ec553e70518415fe775e3feb358ab1099f935a1fb" gracePeriod=10 Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.340043 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.369731 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79744cfd-ecdc-42c4-b70e-bb957640a11c-config-data\") pod \"cinder-scheduler-0\" (UID: \"79744cfd-ecdc-42c4-b70e-bb957640a11c\") " pod="openstack/cinder-scheduler-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.369777 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79744cfd-ecdc-42c4-b70e-bb957640a11c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"79744cfd-ecdc-42c4-b70e-bb957640a11c\") " pod="openstack/cinder-scheduler-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.369882 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79744cfd-ecdc-42c4-b70e-bb957640a11c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"79744cfd-ecdc-42c4-b70e-bb957640a11c\") " pod="openstack/cinder-scheduler-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.369900 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrzlm\" (UniqueName: \"kubernetes.io/projected/79744cfd-ecdc-42c4-b70e-bb957640a11c-kube-api-access-zrzlm\") pod \"cinder-scheduler-0\" (UID: \"79744cfd-ecdc-42c4-b70e-bb957640a11c\") " pod="openstack/cinder-scheduler-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.369964 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79744cfd-ecdc-42c4-b70e-bb957640a11c-scripts\") pod \"cinder-scheduler-0\" (UID: \"79744cfd-ecdc-42c4-b70e-bb957640a11c\") " pod="openstack/cinder-scheduler-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.369980 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79744cfd-ecdc-42c4-b70e-bb957640a11c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"79744cfd-ecdc-42c4-b70e-bb957640a11c\") " pod="openstack/cinder-scheduler-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.372423 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79744cfd-ecdc-42c4-b70e-bb957640a11c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"79744cfd-ecdc-42c4-b70e-bb957640a11c\") " pod="openstack/cinder-scheduler-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.383053 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79744cfd-ecdc-42c4-b70e-bb957640a11c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"79744cfd-ecdc-42c4-b70e-bb957640a11c\") " pod="openstack/cinder-scheduler-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.396972 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-dv5ch"] Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.404902 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79744cfd-ecdc-42c4-b70e-bb957640a11c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"79744cfd-ecdc-42c4-b70e-bb957640a11c\") " pod="openstack/cinder-scheduler-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.407851 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795f4db4bc-dv5ch" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.408258 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79744cfd-ecdc-42c4-b70e-bb957640a11c-config-data\") pod \"cinder-scheduler-0\" (UID: \"79744cfd-ecdc-42c4-b70e-bb957640a11c\") " pod="openstack/cinder-scheduler-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.418972 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrzlm\" (UniqueName: \"kubernetes.io/projected/79744cfd-ecdc-42c4-b70e-bb957640a11c-kube-api-access-zrzlm\") pod \"cinder-scheduler-0\" (UID: \"79744cfd-ecdc-42c4-b70e-bb957640a11c\") " pod="openstack/cinder-scheduler-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.422683 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-dv5ch"] Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.423915 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79744cfd-ecdc-42c4-b70e-bb957640a11c-scripts\") pod \"cinder-scheduler-0\" (UID: \"79744cfd-ecdc-42c4-b70e-bb957640a11c\") " pod="openstack/cinder-scheduler-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.492882 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4e533f1-e8eb-4426-906e-35354266d610-config\") pod \"dnsmasq-dns-795f4db4bc-dv5ch\" (UID: \"a4e533f1-e8eb-4426-906e-35354266d610\") " pod="openstack/dnsmasq-dns-795f4db4bc-dv5ch" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.493002 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4e533f1-e8eb-4426-906e-35354266d610-ovsdbserver-nb\") pod \"dnsmasq-dns-795f4db4bc-dv5ch\" (UID: \"a4e533f1-e8eb-4426-906e-35354266d610\") " pod="openstack/dnsmasq-dns-795f4db4bc-dv5ch" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.493188 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4e533f1-e8eb-4426-906e-35354266d610-dns-swift-storage-0\") pod \"dnsmasq-dns-795f4db4bc-dv5ch\" (UID: \"a4e533f1-e8eb-4426-906e-35354266d610\") " pod="openstack/dnsmasq-dns-795f4db4bc-dv5ch" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.493307 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm767\" (UniqueName: \"kubernetes.io/projected/a4e533f1-e8eb-4426-906e-35354266d610-kube-api-access-rm767\") pod \"dnsmasq-dns-795f4db4bc-dv5ch\" (UID: \"a4e533f1-e8eb-4426-906e-35354266d610\") " pod="openstack/dnsmasq-dns-795f4db4bc-dv5ch" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.493350 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/a4e533f1-e8eb-4426-906e-35354266d610-ovsdbserver-sb\") pod \"dnsmasq-dns-795f4db4bc-dv5ch\" (UID: \"a4e533f1-e8eb-4426-906e-35354266d610\") " pod="openstack/dnsmasq-dns-795f4db4bc-dv5ch" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.493395 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4e533f1-e8eb-4426-906e-35354266d610-dns-svc\") pod \"dnsmasq-dns-795f4db4bc-dv5ch\" (UID: \"a4e533f1-e8eb-4426-906e-35354266d610\") " pod="openstack/dnsmasq-dns-795f4db4bc-dv5ch" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.513002 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.569376 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.571206 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.629377 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4e533f1-e8eb-4426-906e-35354266d610-ovsdbserver-nb\") pod \"dnsmasq-dns-795f4db4bc-dv5ch\" (UID: \"a4e533f1-e8eb-4426-906e-35354266d610\") " pod="openstack/dnsmasq-dns-795f4db4bc-dv5ch" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.629864 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4e533f1-e8eb-4426-906e-35354266d610-dns-swift-storage-0\") pod \"dnsmasq-dns-795f4db4bc-dv5ch\" (UID: \"a4e533f1-e8eb-4426-906e-35354266d610\") " pod="openstack/dnsmasq-dns-795f4db4bc-dv5ch" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.629975 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4e533f1-e8eb-4426-906e-35354266d610-ovsdbserver-sb\") pod \"dnsmasq-dns-795f4db4bc-dv5ch\" (UID: \"a4e533f1-e8eb-4426-906e-35354266d610\") " pod="openstack/dnsmasq-dns-795f4db4bc-dv5ch" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.629996 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm767\" (UniqueName: \"kubernetes.io/projected/a4e533f1-e8eb-4426-906e-35354266d610-kube-api-access-rm767\") pod \"dnsmasq-dns-795f4db4bc-dv5ch\" (UID: \"a4e533f1-e8eb-4426-906e-35354266d610\") " pod="openstack/dnsmasq-dns-795f4db4bc-dv5ch" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.630041 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4e533f1-e8eb-4426-906e-35354266d610-dns-svc\") pod \"dnsmasq-dns-795f4db4bc-dv5ch\" (UID: \"a4e533f1-e8eb-4426-906e-35354266d610\") " pod="openstack/dnsmasq-dns-795f4db4bc-dv5ch" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.630210 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4e533f1-e8eb-4426-906e-35354266d610-config\") pod \"dnsmasq-dns-795f4db4bc-dv5ch\" (UID: \"a4e533f1-e8eb-4426-906e-35354266d610\") " pod="openstack/dnsmasq-dns-795f4db4bc-dv5ch" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.631262 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4e533f1-e8eb-4426-906e-35354266d610-config\") pod \"dnsmasq-dns-795f4db4bc-dv5ch\" (UID: \"a4e533f1-e8eb-4426-906e-35354266d610\") " pod="openstack/dnsmasq-dns-795f4db4bc-dv5ch" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.631832 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4e533f1-e8eb-4426-906e-35354266d610-ovsdbserver-nb\") pod \"dnsmasq-dns-795f4db4bc-dv5ch\" (UID: \"a4e533f1-e8eb-4426-906e-35354266d610\") " pod="openstack/dnsmasq-dns-795f4db4bc-dv5ch" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.632819 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.638489 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4e533f1-e8eb-4426-906e-35354266d610-dns-swift-storage-0\") pod \"dnsmasq-dns-795f4db4bc-dv5ch\" (UID: \"a4e533f1-e8eb-4426-906e-35354266d610\") " pod="openstack/dnsmasq-dns-795f4db4bc-dv5ch" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.643008 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4e533f1-e8eb-4426-906e-35354266d610-dns-svc\") pod \"dnsmasq-dns-795f4db4bc-dv5ch\" (UID: \"a4e533f1-e8eb-4426-906e-35354266d610\") " pod="openstack/dnsmasq-dns-795f4db4bc-dv5ch" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.650368 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.642052 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4e533f1-e8eb-4426-906e-35354266d610-ovsdbserver-sb\") pod \"dnsmasq-dns-795f4db4bc-dv5ch\" (UID: \"a4e533f1-e8eb-4426-906e-35354266d610\") " pod="openstack/dnsmasq-dns-795f4db4bc-dv5ch" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.708583 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm767\" (UniqueName: \"kubernetes.io/projected/a4e533f1-e8eb-4426-906e-35354266d610-kube-api-access-rm767\") pod \"dnsmasq-dns-795f4db4bc-dv5ch\" (UID: \"a4e533f1-e8eb-4426-906e-35354266d610\") " pod="openstack/dnsmasq-dns-795f4db4bc-dv5ch" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.732846 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-config-data\") pod \"cinder-api-0\" (UID: \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\") " pod="openstack/cinder-api-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.732962 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-scripts\") pod \"cinder-api-0\" (UID: \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\") " pod="openstack/cinder-api-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.732981 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-logs\") pod \"cinder-api-0\" (UID: \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\") " pod="openstack/cinder-api-0" Jan 29 
17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.733005 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-769bq\" (UniqueName: \"kubernetes.io/projected/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-kube-api-access-769bq\") pod \"cinder-api-0\" (UID: \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\") " pod="openstack/cinder-api-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.733067 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\") " pod="openstack/cinder-api-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.733195 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\") " pod="openstack/cinder-api-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.733212 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-config-data-custom\") pod \"cinder-api-0\" (UID: \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\") " pod="openstack/cinder-api-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.835589 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\") " pod="openstack/cinder-api-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.835728 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\") " pod="openstack/cinder-api-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.835729 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\") " pod="openstack/cinder-api-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.835751 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-config-data-custom\") pod \"cinder-api-0\" (UID: \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\") " pod="openstack/cinder-api-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.836049 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-config-data\") pod \"cinder-api-0\" (UID: \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\") " pod="openstack/cinder-api-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.836213 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-scripts\") pod 
\"cinder-api-0\" (UID: \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\") " pod="openstack/cinder-api-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.836240 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-logs\") pod \"cinder-api-0\" (UID: \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\") " pod="openstack/cinder-api-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.836312 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-769bq\" (UniqueName: \"kubernetes.io/projected/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-kube-api-access-769bq\") pod \"cinder-api-0\" (UID: \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\") " pod="openstack/cinder-api-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.837088 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-logs\") pod \"cinder-api-0\" (UID: \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\") " pod="openstack/cinder-api-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.849694 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\") " pod="openstack/cinder-api-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.852082 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-scripts\") pod \"cinder-api-0\" (UID: \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\") " pod="openstack/cinder-api-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.852759 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-config-data\") pod \"cinder-api-0\" (UID: \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\") " pod="openstack/cinder-api-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.867644 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-config-data-custom\") pod \"cinder-api-0\" (UID: \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\") " pod="openstack/cinder-api-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.867791 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-769bq\" (UniqueName: \"kubernetes.io/projected/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-kube-api-access-769bq\") pod \"cinder-api-0\" (UID: \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\") " pod="openstack/cinder-api-0" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.970378 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795f4db4bc-dv5ch" Jan 29 17:07:45 crc kubenswrapper[4886]: I0129 17:07:45.976137 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 17:07:46 crc kubenswrapper[4886]: I0129 17:07:46.195317 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 17:07:46 crc kubenswrapper[4886]: I0129 17:07:46.529744 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-dv5ch"] Jan 29 17:07:46 crc kubenswrapper[4886]: W0129 17:07:46.534253 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4e533f1_e8eb_4426_906e_35354266d610.slice/crio-9b0b0e72dbfa9a690950a4cb5f65710c32c08a1c18a1d00cb2ec594ac0b3c616 WatchSource:0}: Error finding container 9b0b0e72dbfa9a690950a4cb5f65710c32c08a1c18a1d00cb2ec594ac0b3c616: Status 404 returned error can't find the container with id 9b0b0e72dbfa9a690950a4cb5f65710c32c08a1c18a1d00cb2ec594ac0b3c616 Jan 29 17:07:46 crc kubenswrapper[4886]: I0129 17:07:46.725476 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 17:07:46 crc kubenswrapper[4886]: I0129 17:07:46.917596 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2","Type":"ContainerStarted","Data":"281f6c4ddc5b493d23b42767bfd856396f39345c359337095a814d651f657b39"} Jan 29 17:07:46 crc kubenswrapper[4886]: I0129 17:07:46.938513 4886 generic.go:334] "Generic (PLEG): container finished" podID="9ac97bdb-475a-4061-96b0-1423be10bb5b" containerID="a528683376327e5804a4ea1ec553e70518415fe775e3feb358ab1099f935a1fb" exitCode=0 Jan 29 17:07:46 crc kubenswrapper[4886]: I0129 17:07:46.938581 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q" event={"ID":"9ac97bdb-475a-4061-96b0-1423be10bb5b","Type":"ContainerDied","Data":"a528683376327e5804a4ea1ec553e70518415fe775e3feb358ab1099f935a1fb"} Jan 29 17:07:46 crc kubenswrapper[4886]: I0129 17:07:46.945684 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-dv5ch" event={"ID":"a4e533f1-e8eb-4426-906e-35354266d610","Type":"ContainerStarted","Data":"9b0b0e72dbfa9a690950a4cb5f65710c32c08a1c18a1d00cb2ec594ac0b3c616"} Jan 29 17:07:46 crc kubenswrapper[4886]: I0129 17:07:46.953343 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"79744cfd-ecdc-42c4-b70e-bb957640a11c","Type":"ContainerStarted","Data":"eb5bacab0ef6b5257f3ba5127165c9496314e35a73af62c8e260a0b9866372e0"} Jan 29 17:07:47 crc kubenswrapper[4886]: I0129 17:07:47.277277 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q" Jan 29 17:07:47 crc kubenswrapper[4886]: I0129 17:07:47.318394 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 17:07:47 crc kubenswrapper[4886]: I0129 17:07:47.386172 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ac97bdb-475a-4061-96b0-1423be10bb5b-config\") pod \"9ac97bdb-475a-4061-96b0-1423be10bb5b\" (UID: \"9ac97bdb-475a-4061-96b0-1423be10bb5b\") " Jan 29 17:07:47 crc kubenswrapper[4886]: I0129 17:07:47.386229 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ac97bdb-475a-4061-96b0-1423be10bb5b-ovsdbserver-nb\") pod \"9ac97bdb-475a-4061-96b0-1423be10bb5b\" (UID: \"9ac97bdb-475a-4061-96b0-1423be10bb5b\") " Jan 29 17:07:47 crc kubenswrapper[4886]: I0129 17:07:47.386354 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ac97bdb-475a-4061-96b0-1423be10bb5b-dns-svc\") pod \"9ac97bdb-475a-4061-96b0-1423be10bb5b\" (UID: \"9ac97bdb-475a-4061-96b0-1423be10bb5b\") " Jan 29 17:07:47 crc kubenswrapper[4886]: I0129 17:07:47.386377 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ac97bdb-475a-4061-96b0-1423be10bb5b-dns-swift-storage-0\") pod \"9ac97bdb-475a-4061-96b0-1423be10bb5b\" (UID: \"9ac97bdb-475a-4061-96b0-1423be10bb5b\") " Jan 29 17:07:47 crc kubenswrapper[4886]: I0129 17:07:47.386554 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ac97bdb-475a-4061-96b0-1423be10bb5b-ovsdbserver-sb\") pod \"9ac97bdb-475a-4061-96b0-1423be10bb5b\" (UID: \"9ac97bdb-475a-4061-96b0-1423be10bb5b\") " Jan 29 17:07:47 crc kubenswrapper[4886]: I0129 17:07:47.386611 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzzbl\" (UniqueName: \"kubernetes.io/projected/9ac97bdb-475a-4061-96b0-1423be10bb5b-kube-api-access-tzzbl\") pod \"9ac97bdb-475a-4061-96b0-1423be10bb5b\" (UID: \"9ac97bdb-475a-4061-96b0-1423be10bb5b\") " Jan 29 17:07:47 crc kubenswrapper[4886]: I0129 17:07:47.402344 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ac97bdb-475a-4061-96b0-1423be10bb5b-kube-api-access-tzzbl" (OuterVolumeSpecName: "kube-api-access-tzzbl") pod "9ac97bdb-475a-4061-96b0-1423be10bb5b" (UID: "9ac97bdb-475a-4061-96b0-1423be10bb5b"). InnerVolumeSpecName "kube-api-access-tzzbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:07:47 crc kubenswrapper[4886]: I0129 17:07:47.464822 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ac97bdb-475a-4061-96b0-1423be10bb5b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9ac97bdb-475a-4061-96b0-1423be10bb5b" (UID: "9ac97bdb-475a-4061-96b0-1423be10bb5b"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:07:47 crc kubenswrapper[4886]: I0129 17:07:47.504971 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzzbl\" (UniqueName: \"kubernetes.io/projected/9ac97bdb-475a-4061-96b0-1423be10bb5b-kube-api-access-tzzbl\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:47 crc kubenswrapper[4886]: I0129 17:07:47.505217 4886 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ac97bdb-475a-4061-96b0-1423be10bb5b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:47 crc kubenswrapper[4886]: I0129 17:07:47.558731 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ac97bdb-475a-4061-96b0-1423be10bb5b-config" (OuterVolumeSpecName: "config") pod "9ac97bdb-475a-4061-96b0-1423be10bb5b" (UID: "9ac97bdb-475a-4061-96b0-1423be10bb5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:07:47 crc kubenswrapper[4886]: I0129 17:07:47.578082 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ac97bdb-475a-4061-96b0-1423be10bb5b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9ac97bdb-475a-4061-96b0-1423be10bb5b" (UID: "9ac97bdb-475a-4061-96b0-1423be10bb5b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:07:47 crc kubenswrapper[4886]: I0129 17:07:47.598027 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ac97bdb-475a-4061-96b0-1423be10bb5b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9ac97bdb-475a-4061-96b0-1423be10bb5b" (UID: "9ac97bdb-475a-4061-96b0-1423be10bb5b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:07:47 crc kubenswrapper[4886]: I0129 17:07:47.615410 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ac97bdb-475a-4061-96b0-1423be10bb5b-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:47 crc kubenswrapper[4886]: I0129 17:07:47.615443 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ac97bdb-475a-4061-96b0-1423be10bb5b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:47 crc kubenswrapper[4886]: I0129 17:07:47.615455 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ac97bdb-475a-4061-96b0-1423be10bb5b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:47 crc kubenswrapper[4886]: I0129 17:07:47.625065 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ac97bdb-475a-4061-96b0-1423be10bb5b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9ac97bdb-475a-4061-96b0-1423be10bb5b" (UID: "9ac97bdb-475a-4061-96b0-1423be10bb5b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:07:47 crc kubenswrapper[4886]: I0129 17:07:47.717928 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ac97bdb-475a-4061-96b0-1423be10bb5b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:47 crc kubenswrapper[4886]: I0129 17:07:47.968678 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2","Type":"ContainerStarted","Data":"b8d0ea03cf6cf69b26bdf55d5de8b0049bbdd593eaf6801f03f5d5761e184e45"} Jan 29 17:07:47 crc kubenswrapper[4886]: I0129 17:07:47.971040 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q" Jan 29 17:07:47 crc kubenswrapper[4886]: I0129 17:07:47.971051 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-jsg5q" event={"ID":"9ac97bdb-475a-4061-96b0-1423be10bb5b","Type":"ContainerDied","Data":"1724f7bc6805ebdf2ea8515900b97a42430de51ca57fd28deec62f818f0909c2"} Jan 29 17:07:47 crc kubenswrapper[4886]: I0129 17:07:47.971278 4886 scope.go:117] "RemoveContainer" containerID="a528683376327e5804a4ea1ec553e70518415fe775e3feb358ab1099f935a1fb" Jan 29 17:07:47 crc kubenswrapper[4886]: I0129 17:07:47.980761 4886 generic.go:334] "Generic (PLEG): container finished" podID="a4e533f1-e8eb-4426-906e-35354266d610" containerID="2012816a934b66e60ffd90c59e1fa261b396b239468adba78a0dedfe4395c1be" exitCode=0 Jan 29 17:07:47 crc kubenswrapper[4886]: I0129 17:07:47.980814 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-dv5ch" event={"ID":"a4e533f1-e8eb-4426-906e-35354266d610","Type":"ContainerDied","Data":"2012816a934b66e60ffd90c59e1fa261b396b239468adba78a0dedfe4395c1be"} Jan 29 17:07:48 crc kubenswrapper[4886]: I0129 17:07:48.030366 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-jsg5q"] Jan 29 17:07:48 crc kubenswrapper[4886]: I0129 17:07:48.040114 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-jsg5q"] Jan 29 17:07:48 crc kubenswrapper[4886]: I0129 17:07:48.346035 4886 scope.go:117] "RemoveContainer" containerID="d6011c232b01e3892826684cea65e05a2b5a15c43a2d859d545b9c20ac294a14" Jan 29 17:07:48 crc kubenswrapper[4886]: I0129 17:07:48.492850 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-55f7ff7dd6-jj4jw" Jan 29 17:07:48 crc kubenswrapper[4886]: I0129 17:07:48.513536 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-55f7ff7dd6-jj4jw" Jan 29 17:07:48 crc kubenswrapper[4886]: I0129 17:07:48.652465 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ac97bdb-475a-4061-96b0-1423be10bb5b" path="/var/lib/kubelet/pods/9ac97bdb-475a-4061-96b0-1423be10bb5b/volumes" Jan 29 17:07:49 crc kubenswrapper[4886]: I0129 17:07:49.002693 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5499bdc9-q6hr4" Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.107312 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 29 17:07:50 crc kubenswrapper[4886]: E0129 17:07:50.120176 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ac97bdb-475a-4061-96b0-1423be10bb5b" containerName="init" Jan 29 17:07:50 crc kubenswrapper[4886]: 
I0129 17:07:50.120193 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ac97bdb-475a-4061-96b0-1423be10bb5b" containerName="init" Jan 29 17:07:50 crc kubenswrapper[4886]: E0129 17:07:50.120229 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ac97bdb-475a-4061-96b0-1423be10bb5b" containerName="dnsmasq-dns" Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.120235 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ac97bdb-475a-4061-96b0-1423be10bb5b" containerName="dnsmasq-dns" Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.120476 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ac97bdb-475a-4061-96b0-1423be10bb5b" containerName="dnsmasq-dns" Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.121285 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.127274 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-jq45j" Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.127529 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.128442 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.167428 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"79744cfd-ecdc-42c4-b70e-bb957640a11c","Type":"ContainerStarted","Data":"dd01b92d286ab63ee03bff172b9b03aa69d2a7db780bc4a7761f9cf8e7790134"} Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.181965 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2","Type":"ContainerStarted","Data":"427b1632fa7330e8e999fa999675e7326ae042f6f381126c9b2276f118bf9b8f"} Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.182212 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="cc58d1b4-0d5e-4768-9a82-b6bbcca420a2" containerName="cinder-api-log" containerID="cri-o://b8d0ea03cf6cf69b26bdf55d5de8b0049bbdd593eaf6801f03f5d5761e184e45" gracePeriod=30 Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.182310 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.182496 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="cc58d1b4-0d5e-4768-9a82-b6bbcca420a2" containerName="cinder-api" containerID="cri-o://427b1632fa7330e8e999fa999675e7326ae042f6f381126c9b2276f118bf9b8f" gracePeriod=30 Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.199266 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.218669 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/be43aab6-3888-4260-a85c-147e2ae0a36d-openstack-config-secret\") pod \"openstackclient\" (UID: \"be43aab6-3888-4260-a85c-147e2ae0a36d\") " pod="openstack/openstackclient" Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.218713 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/be43aab6-3888-4260-a85c-147e2ae0a36d-openstack-config\") pod \"openstackclient\" (UID: \"be43aab6-3888-4260-a85c-147e2ae0a36d\") " pod="openstack/openstackclient" Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.218737 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be43aab6-3888-4260-a85c-147e2ae0a36d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"be43aab6-3888-4260-a85c-147e2ae0a36d\") " pod="openstack/openstackclient" Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.218817 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4t4b\" (UniqueName: \"kubernetes.io/projected/be43aab6-3888-4260-a85c-147e2ae0a36d-kube-api-access-l4t4b\") pod \"openstackclient\" (UID: \"be43aab6-3888-4260-a85c-147e2ae0a36d\") " pod="openstack/openstackclient" Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.225567 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-dv5ch" event={"ID":"a4e533f1-e8eb-4426-906e-35354266d610","Type":"ContainerStarted","Data":"bfb4e65e7631317b75e0b15c39b90031add550dcb40292d0be47c6410cfdc89e"} Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.226936 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-795f4db4bc-dv5ch" Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.250549 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.25052849 podStartE2EDuration="5.25052849s" podCreationTimestamp="2026-01-29 17:07:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:07:50.213887938 +0000 UTC m=+2753.122607210" watchObservedRunningTime="2026-01-29 17:07:50.25052849 +0000 UTC m=+2753.159247762" Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.288686 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-795f4db4bc-dv5ch" podStartSLOduration=5.288565661 podStartE2EDuration="5.288565661s" podCreationTimestamp="2026-01-29 17:07:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:07:50.245275182 +0000 UTC m=+2753.153994454" watchObservedRunningTime="2026-01-29 17:07:50.288565661 +0000 UTC m=+2753.197284933" Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.323635 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/be43aab6-3888-4260-a85c-147e2ae0a36d-openstack-config-secret\") pod \"openstackclient\" (UID: \"be43aab6-3888-4260-a85c-147e2ae0a36d\") " pod="openstack/openstackclient" Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.323675 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/be43aab6-3888-4260-a85c-147e2ae0a36d-openstack-config\") pod \"openstackclient\" (UID: \"be43aab6-3888-4260-a85c-147e2ae0a36d\") " pod="openstack/openstackclient" Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.323705 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be43aab6-3888-4260-a85c-147e2ae0a36d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"be43aab6-3888-4260-a85c-147e2ae0a36d\") " pod="openstack/openstackclient" Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.323800 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4t4b\" (UniqueName: \"kubernetes.io/projected/be43aab6-3888-4260-a85c-147e2ae0a36d-kube-api-access-l4t4b\") pod \"openstackclient\" (UID: \"be43aab6-3888-4260-a85c-147e2ae0a36d\") " pod="openstack/openstackclient" Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.325490 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/be43aab6-3888-4260-a85c-147e2ae0a36d-openstack-config\") pod \"openstackclient\" (UID: \"be43aab6-3888-4260-a85c-147e2ae0a36d\") " pod="openstack/openstackclient" Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.336019 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/be43aab6-3888-4260-a85c-147e2ae0a36d-openstack-config-secret\") pod \"openstackclient\" (UID: \"be43aab6-3888-4260-a85c-147e2ae0a36d\") " pod="openstack/openstackclient" Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.346001 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be43aab6-3888-4260-a85c-147e2ae0a36d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"be43aab6-3888-4260-a85c-147e2ae0a36d\") " pod="openstack/openstackclient" Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.358901 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4t4b\" (UniqueName: \"kubernetes.io/projected/be43aab6-3888-4260-a85c-147e2ae0a36d-kube-api-access-l4t4b\") pod \"openstackclient\" (UID: \"be43aab6-3888-4260-a85c-147e2ae0a36d\") " pod="openstack/openstackclient" Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.482137 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.844177 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-795d8c76d8-x2zqv" Jan 29 17:07:50 crc kubenswrapper[4886]: I0129 17:07:50.846885 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-795d8c76d8-x2zqv" Jan 29 17:07:51 crc kubenswrapper[4886]: I0129 17:07:51.253456 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"79744cfd-ecdc-42c4-b70e-bb957640a11c","Type":"ContainerStarted","Data":"3d38ab3f39b8f10e80b68dcbf56b94dd2483224e667fea1a1a75ada7c0ecf901"} Jan 29 17:07:51 crc kubenswrapper[4886]: I0129 17:07:51.259829 4886 generic.go:334] "Generic (PLEG): container finished" podID="43da0665-7e6a-4176-ae84-71128a89a243" containerID="c4ce1f7996acaa4140e3f499ede2bc0c80a3f2eb7c1df999e0b4f5903e1d75cf" exitCode=0 Jan 29 17:07:51 crc kubenswrapper[4886]: I0129 17:07:51.259878 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qglhp" event={"ID":"43da0665-7e6a-4176-ae84-71128a89a243","Type":"ContainerDied","Data":"c4ce1f7996acaa4140e3f499ede2bc0c80a3f2eb7c1df999e0b4f5903e1d75cf"} Jan 29 17:07:51 crc kubenswrapper[4886]: I0129 17:07:51.288711 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.194423043 podStartE2EDuration="6.288692071s" podCreationTimestamp="2026-01-29 17:07:45 +0000 UTC" firstStartedPulling="2026-01-29 17:07:46.253077438 +0000 UTC m=+2749.161796710" lastFinishedPulling="2026-01-29 17:07:48.347346456 +0000 UTC m=+2751.256065738" observedRunningTime="2026-01-29 17:07:51.276255941 +0000 UTC m=+2754.184975233" watchObservedRunningTime="2026-01-29 17:07:51.288692071 +0000 UTC m=+2754.197411343" Jan 29 17:07:51 crc kubenswrapper[4886]: I0129 17:07:51.301123 4886 generic.go:334] "Generic (PLEG): container finished" podID="cc58d1b4-0d5e-4768-9a82-b6bbcca420a2" containerID="427b1632fa7330e8e999fa999675e7326ae042f6f381126c9b2276f118bf9b8f" exitCode=0 Jan 29 17:07:51 crc kubenswrapper[4886]: I0129 17:07:51.301150 4886 generic.go:334] "Generic (PLEG): container finished" podID="cc58d1b4-0d5e-4768-9a82-b6bbcca420a2" containerID="b8d0ea03cf6cf69b26bdf55d5de8b0049bbdd593eaf6801f03f5d5761e184e45" exitCode=143 Jan 29 17:07:51 crc kubenswrapper[4886]: I0129 17:07:51.302174 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2","Type":"ContainerDied","Data":"427b1632fa7330e8e999fa999675e7326ae042f6f381126c9b2276f118bf9b8f"} Jan 29 17:07:51 crc kubenswrapper[4886]: I0129 17:07:51.302199 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2","Type":"ContainerDied","Data":"b8d0ea03cf6cf69b26bdf55d5de8b0049bbdd593eaf6801f03f5d5761e184e45"} Jan 29 17:07:51 crc kubenswrapper[4886]: I0129 17:07:51.391682 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5fb894ff6d-w7s26" Jan 29 17:07:51 crc kubenswrapper[4886]: I0129 17:07:51.714417 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 17:07:51 crc kubenswrapper[4886]: W0129 17:07:51.720256 4886 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe43aab6_3888_4260_a85c_147e2ae0a36d.slice/crio-f55162e34c8bb62e3e1744e4e6436f51562cbf1a2bd6ce27de003f68256e0764 WatchSource:0}: Error finding container f55162e34c8bb62e3e1744e4e6436f51562cbf1a2bd6ce27de003f68256e0764: Status 404 returned error can't find the container with id f55162e34c8bb62e3e1744e4e6436f51562cbf1a2bd6ce27de003f68256e0764 Jan 29 17:07:51 crc kubenswrapper[4886]: I0129 17:07:51.826362 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5fb894ff6d-w7s26" Jan 29 17:07:51 crc kubenswrapper[4886]: I0129 17:07:51.965084 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-55f7ff7dd6-jj4jw"] Jan 29 17:07:51 crc kubenswrapper[4886]: I0129 17:07:51.965510 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-55f7ff7dd6-jj4jw" podUID="ea36feff-2438-49e4-b779-0b083addd0a8" containerName="barbican-api-log" containerID="cri-o://f23c7cc8a8209a15c4be1f866071e7d19219ea178dc6b2496da6cf2510dacfc5" gracePeriod=30 Jan 29 17:07:51 crc kubenswrapper[4886]: I0129 17:07:51.965731 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-55f7ff7dd6-jj4jw" podUID="ea36feff-2438-49e4-b779-0b083addd0a8" containerName="barbican-api" containerID="cri-o://8bc4314631c2d889fe7693108f39c4873628c917868bfba6190057b2b09695e2" gracePeriod=30 Jan 29 17:07:51 crc kubenswrapper[4886]: I0129 17:07:51.989548 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.099966 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-logs\") pod \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\" (UID: \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\") " Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.107799 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-config-data-custom\") pod \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\" (UID: \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\") " Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.107932 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-combined-ca-bundle\") pod \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\" (UID: \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\") " Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.108023 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-scripts\") pod \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\" (UID: \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\") " Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.108138 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-config-data\") pod \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\" (UID: \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\") " Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.108271 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-etc-machine-id\") pod \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\" (UID: \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\") " Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.108398 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-769bq\" (UniqueName: \"kubernetes.io/projected/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-kube-api-access-769bq\") pod \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\" (UID: \"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2\") " Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.100788 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-logs" (OuterVolumeSpecName: "logs") pod "cc58d1b4-0d5e-4768-9a82-b6bbcca420a2" (UID: "cc58d1b4-0d5e-4768-9a82-b6bbcca420a2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.114573 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "cc58d1b4-0d5e-4768-9a82-b6bbcca420a2" (UID: "cc58d1b4-0d5e-4768-9a82-b6bbcca420a2"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.120744 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-scripts" (OuterVolumeSpecName: "scripts") pod "cc58d1b4-0d5e-4768-9a82-b6bbcca420a2" (UID: "cc58d1b4-0d5e-4768-9a82-b6bbcca420a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.143300 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cc58d1b4-0d5e-4768-9a82-b6bbcca420a2" (UID: "cc58d1b4-0d5e-4768-9a82-b6bbcca420a2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.150548 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-kube-api-access-769bq" (OuterVolumeSpecName: "kube-api-access-769bq") pod "cc58d1b4-0d5e-4768-9a82-b6bbcca420a2" (UID: "cc58d1b4-0d5e-4768-9a82-b6bbcca420a2"). InnerVolumeSpecName "kube-api-access-769bq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.210637 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-769bq\" (UniqueName: \"kubernetes.io/projected/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-kube-api-access-769bq\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.210668 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-logs\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.210679 4886 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.210687 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.210696 4886 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.283481 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc58d1b4-0d5e-4768-9a82-b6bbcca420a2" (UID: "cc58d1b4-0d5e-4768-9a82-b6bbcca420a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.314080 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.344171 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cc58d1b4-0d5e-4768-9a82-b6bbcca420a2","Type":"ContainerDied","Data":"281f6c4ddc5b493d23b42767bfd856396f39345c359337095a814d651f657b39"} Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.344222 4886 scope.go:117] "RemoveContainer" containerID="427b1632fa7330e8e999fa999675e7326ae042f6f381126c9b2276f118bf9b8f" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.344218 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.365760 4886 generic.go:334] "Generic (PLEG): container finished" podID="ea36feff-2438-49e4-b779-0b083addd0a8" containerID="f23c7cc8a8209a15c4be1f866071e7d19219ea178dc6b2496da6cf2510dacfc5" exitCode=143 Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.365870 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55f7ff7dd6-jj4jw" event={"ID":"ea36feff-2438-49e4-b779-0b083addd0a8","Type":"ContainerDied","Data":"f23c7cc8a8209a15c4be1f866071e7d19219ea178dc6b2496da6cf2510dacfc5"} Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.370068 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"be43aab6-3888-4260-a85c-147e2ae0a36d","Type":"ContainerStarted","Data":"f55162e34c8bb62e3e1744e4e6436f51562cbf1a2bd6ce27de003f68256e0764"} Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.391825 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-config-data" (OuterVolumeSpecName: "config-data") pod "cc58d1b4-0d5e-4768-9a82-b6bbcca420a2" (UID: "cc58d1b4-0d5e-4768-9a82-b6bbcca420a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.406717 4886 scope.go:117] "RemoveContainer" containerID="b8d0ea03cf6cf69b26bdf55d5de8b0049bbdd593eaf6801f03f5d5761e184e45" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.421766 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.733746 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.766396 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.777701 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 29 17:07:52 crc kubenswrapper[4886]: E0129 17:07:52.778382 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc58d1b4-0d5e-4768-9a82-b6bbcca420a2" containerName="cinder-api-log" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.778469 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc58d1b4-0d5e-4768-9a82-b6bbcca420a2" containerName="cinder-api-log" Jan 29 17:07:52 crc kubenswrapper[4886]: E0129 17:07:52.778540 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc58d1b4-0d5e-4768-9a82-b6bbcca420a2" containerName="cinder-api" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.778592 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc58d1b4-0d5e-4768-9a82-b6bbcca420a2" containerName="cinder-api" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.778865 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc58d1b4-0d5e-4768-9a82-b6bbcca420a2" containerName="cinder-api" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.778949 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc58d1b4-0d5e-4768-9a82-b6bbcca420a2" containerName="cinder-api-log" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.780174 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.783611 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.783786 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.784510 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.790912 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.828009 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3573eaa4-4c27-4747-a691-15ae61d152f3-config-data-custom\") pod \"cinder-api-0\" (UID: \"3573eaa4-4c27-4747-a691-15ae61d152f3\") " pod="openstack/cinder-api-0" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.828910 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3573eaa4-4c27-4747-a691-15ae61d152f3-logs\") pod \"cinder-api-0\" (UID: \"3573eaa4-4c27-4747-a691-15ae61d152f3\") " pod="openstack/cinder-api-0" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.828975 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3573eaa4-4c27-4747-a691-15ae61d152f3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3573eaa4-4c27-4747-a691-15ae61d152f3\") " pod="openstack/cinder-api-0" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.829039 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3573eaa4-4c27-4747-a691-15ae61d152f3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3573eaa4-4c27-4747-a691-15ae61d152f3\") " pod="openstack/cinder-api-0" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.829058 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4j4v\" (UniqueName: \"kubernetes.io/projected/3573eaa4-4c27-4747-a691-15ae61d152f3-kube-api-access-v4j4v\") pod \"cinder-api-0\" (UID: \"3573eaa4-4c27-4747-a691-15ae61d152f3\") " pod="openstack/cinder-api-0" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.829079 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3573eaa4-4c27-4747-a691-15ae61d152f3-config-data\") pod \"cinder-api-0\" (UID: \"3573eaa4-4c27-4747-a691-15ae61d152f3\") " pod="openstack/cinder-api-0" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.829153 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3573eaa4-4c27-4747-a691-15ae61d152f3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3573eaa4-4c27-4747-a691-15ae61d152f3\") " pod="openstack/cinder-api-0" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.829220 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3573eaa4-4c27-4747-a691-15ae61d152f3-scripts\") pod \"cinder-api-0\" (UID: \"3573eaa4-4c27-4747-a691-15ae61d152f3\") " pod="openstack/cinder-api-0" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.829247 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3573eaa4-4c27-4747-a691-15ae61d152f3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3573eaa4-4c27-4747-a691-15ae61d152f3\") " pod="openstack/cinder-api-0" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.928709 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qglhp" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.931235 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3573eaa4-4c27-4747-a691-15ae61d152f3-config-data-custom\") pod \"cinder-api-0\" (UID: \"3573eaa4-4c27-4747-a691-15ae61d152f3\") " pod="openstack/cinder-api-0" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.931508 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3573eaa4-4c27-4747-a691-15ae61d152f3-logs\") pod \"cinder-api-0\" (UID: \"3573eaa4-4c27-4747-a691-15ae61d152f3\") " pod="openstack/cinder-api-0" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.931644 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3573eaa4-4c27-4747-a691-15ae61d152f3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3573eaa4-4c27-4747-a691-15ae61d152f3\") " pod="openstack/cinder-api-0" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.931780 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3573eaa4-4c27-4747-a691-15ae61d152f3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3573eaa4-4c27-4747-a691-15ae61d152f3\") " pod="openstack/cinder-api-0" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.931914 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4j4v\" (UniqueName: \"kubernetes.io/projected/3573eaa4-4c27-4747-a691-15ae61d152f3-kube-api-access-v4j4v\") pod \"cinder-api-0\" (UID: \"3573eaa4-4c27-4747-a691-15ae61d152f3\") " pod="openstack/cinder-api-0" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.932032 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3573eaa4-4c27-4747-a691-15ae61d152f3-config-data\") pod \"cinder-api-0\" (UID: \"3573eaa4-4c27-4747-a691-15ae61d152f3\") " pod="openstack/cinder-api-0" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.932214 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3573eaa4-4c27-4747-a691-15ae61d152f3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3573eaa4-4c27-4747-a691-15ae61d152f3\") " pod="openstack/cinder-api-0" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.932380 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3573eaa4-4c27-4747-a691-15ae61d152f3-scripts\") pod \"cinder-api-0\" (UID: \"3573eaa4-4c27-4747-a691-15ae61d152f3\") " 
pod="openstack/cinder-api-0" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.932510 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3573eaa4-4c27-4747-a691-15ae61d152f3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3573eaa4-4c27-4747-a691-15ae61d152f3\") " pod="openstack/cinder-api-0" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.932781 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3573eaa4-4c27-4747-a691-15ae61d152f3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3573eaa4-4c27-4747-a691-15ae61d152f3\") " pod="openstack/cinder-api-0" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.933507 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3573eaa4-4c27-4747-a691-15ae61d152f3-logs\") pod \"cinder-api-0\" (UID: \"3573eaa4-4c27-4747-a691-15ae61d152f3\") " pod="openstack/cinder-api-0" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.947217 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3573eaa4-4c27-4747-a691-15ae61d152f3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3573eaa4-4c27-4747-a691-15ae61d152f3\") " pod="openstack/cinder-api-0" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.951374 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3573eaa4-4c27-4747-a691-15ae61d152f3-scripts\") pod \"cinder-api-0\" (UID: \"3573eaa4-4c27-4747-a691-15ae61d152f3\") " pod="openstack/cinder-api-0" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.952173 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3573eaa4-4c27-4747-a691-15ae61d152f3-config-data-custom\") pod \"cinder-api-0\" (UID: \"3573eaa4-4c27-4747-a691-15ae61d152f3\") " pod="openstack/cinder-api-0" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.952982 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3573eaa4-4c27-4747-a691-15ae61d152f3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3573eaa4-4c27-4747-a691-15ae61d152f3\") " pod="openstack/cinder-api-0" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.966029 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4j4v\" (UniqueName: \"kubernetes.io/projected/3573eaa4-4c27-4747-a691-15ae61d152f3-kube-api-access-v4j4v\") pod \"cinder-api-0\" (UID: \"3573eaa4-4c27-4747-a691-15ae61d152f3\") " pod="openstack/cinder-api-0" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.966267 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3573eaa4-4c27-4747-a691-15ae61d152f3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3573eaa4-4c27-4747-a691-15ae61d152f3\") " pod="openstack/cinder-api-0" Jan 29 17:07:52 crc kubenswrapper[4886]: I0129 17:07:52.966924 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3573eaa4-4c27-4747-a691-15ae61d152f3-config-data\") pod \"cinder-api-0\" (UID: \"3573eaa4-4c27-4747-a691-15ae61d152f3\") " pod="openstack/cinder-api-0" Jan 29 17:07:53 crc kubenswrapper[4886]: 
I0129 17:07:53.131909 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.136162 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkvgz\" (UniqueName: \"kubernetes.io/projected/43da0665-7e6a-4176-ae84-71128a89a243-kube-api-access-vkvgz\") pod \"43da0665-7e6a-4176-ae84-71128a89a243\" (UID: \"43da0665-7e6a-4176-ae84-71128a89a243\") " Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.136209 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43da0665-7e6a-4176-ae84-71128a89a243-combined-ca-bundle\") pod \"43da0665-7e6a-4176-ae84-71128a89a243\" (UID: \"43da0665-7e6a-4176-ae84-71128a89a243\") " Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.136315 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/43da0665-7e6a-4176-ae84-71128a89a243-config\") pod \"43da0665-7e6a-4176-ae84-71128a89a243\" (UID: \"43da0665-7e6a-4176-ae84-71128a89a243\") " Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.154593 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43da0665-7e6a-4176-ae84-71128a89a243-kube-api-access-vkvgz" (OuterVolumeSpecName: "kube-api-access-vkvgz") pod "43da0665-7e6a-4176-ae84-71128a89a243" (UID: "43da0665-7e6a-4176-ae84-71128a89a243"). InnerVolumeSpecName "kube-api-access-vkvgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.176191 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43da0665-7e6a-4176-ae84-71128a89a243-config" (OuterVolumeSpecName: "config") pod "43da0665-7e6a-4176-ae84-71128a89a243" (UID: "43da0665-7e6a-4176-ae84-71128a89a243"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.222450 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43da0665-7e6a-4176-ae84-71128a89a243-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43da0665-7e6a-4176-ae84-71128a89a243" (UID: "43da0665-7e6a-4176-ae84-71128a89a243"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.239839 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkvgz\" (UniqueName: \"kubernetes.io/projected/43da0665-7e6a-4176-ae84-71128a89a243-kube-api-access-vkvgz\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.240674 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43da0665-7e6a-4176-ae84-71128a89a243-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.240692 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/43da0665-7e6a-4176-ae84-71128a89a243-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.391964 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-qglhp" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.393580 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qglhp" event={"ID":"43da0665-7e6a-4176-ae84-71128a89a243","Type":"ContainerDied","Data":"466198a6dbe8073f38dde3862e5bfda50e204a4fc5dd98f6c616c1e63cc8d1a0"} Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.393653 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="466198a6dbe8073f38dde3862e5bfda50e204a4fc5dd98f6c616c1e63cc8d1a0" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.409413 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24e9fd03-4a7f-45c7-83e6-608ad7648766","Type":"ContainerStarted","Data":"44a3542db94b31c96db714bd6c3559bd3e1d7d7a66d633f86abe33fb9a6f4bd0"} Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.410616 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.438500 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.251302755 podStartE2EDuration="16.438482011s" podCreationTimestamp="2026-01-29 17:07:37 +0000 UTC" firstStartedPulling="2026-01-29 17:07:38.553135205 +0000 UTC m=+2741.461854487" lastFinishedPulling="2026-01-29 17:07:51.740314471 +0000 UTC m=+2754.649033743" observedRunningTime="2026-01-29 17:07:53.436793273 +0000 UTC m=+2756.345512545" watchObservedRunningTime="2026-01-29 17:07:53.438482011 +0000 UTC m=+2756.347201283" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.523948 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-dv5ch"] Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.524171 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-795f4db4bc-dv5ch" podUID="a4e533f1-e8eb-4426-906e-35354266d610" containerName="dnsmasq-dns" containerID="cri-o://bfb4e65e7631317b75e0b15c39b90031add550dcb40292d0be47c6410cfdc89e" gracePeriod=10 Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.586585 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lbcqc"] Jan 29 17:07:53 crc kubenswrapper[4886]: E0129 17:07:53.618408 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43da0665-7e6a-4176-ae84-71128a89a243" containerName="neutron-db-sync" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.618449 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="43da0665-7e6a-4176-ae84-71128a89a243" containerName="neutron-db-sync" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.649639 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="43da0665-7e6a-4176-ae84-71128a89a243" containerName="neutron-db-sync" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.660178 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-lbcqc" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.680372 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lbcqc"] Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.732567 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7854df7c4b-dn4j7"] Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.747734 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7854df7c4b-dn4j7" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.751643 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-wvjgr" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.751890 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.756457 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7854df7c4b-dn4j7"] Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.760576 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.764379 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.766260 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/77e77908-f078-4711-8c40-5e0bbda2a830-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-lbcqc\" (UID: \"77e77908-f078-4711-8c40-5e0bbda2a830\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lbcqc" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.766346 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rwhj\" (UniqueName: \"kubernetes.io/projected/77e77908-f078-4711-8c40-5e0bbda2a830-kube-api-access-6rwhj\") pod \"dnsmasq-dns-5c9776ccc5-lbcqc\" (UID: \"77e77908-f078-4711-8c40-5e0bbda2a830\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lbcqc" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.766397 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/77e77908-f078-4711-8c40-5e0bbda2a830-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-lbcqc\" (UID: \"77e77908-f078-4711-8c40-5e0bbda2a830\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lbcqc" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.766415 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/77e77908-f078-4711-8c40-5e0bbda2a830-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-lbcqc\" (UID: \"77e77908-f078-4711-8c40-5e0bbda2a830\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lbcqc" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.766490 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77e77908-f078-4711-8c40-5e0bbda2a830-config\") pod \"dnsmasq-dns-5c9776ccc5-lbcqc\" (UID: \"77e77908-f078-4711-8c40-5e0bbda2a830\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lbcqc" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.766510 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/77e77908-f078-4711-8c40-5e0bbda2a830-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-lbcqc\" (UID: \"77e77908-f078-4711-8c40-5e0bbda2a830\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lbcqc" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.807639 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.870049 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff8b641-0d76-41ce-b6ac-7d708effebc0-combined-ca-bundle\") pod \"neutron-7854df7c4b-dn4j7\" (UID: \"0ff8b641-0d76-41ce-b6ac-7d708effebc0\") " pod="openstack/neutron-7854df7c4b-dn4j7" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.870141 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/77e77908-f078-4711-8c40-5e0bbda2a830-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-lbcqc\" (UID: \"77e77908-f078-4711-8c40-5e0bbda2a830\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lbcqc" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.870183 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rwhj\" (UniqueName: \"kubernetes.io/projected/77e77908-f078-4711-8c40-5e0bbda2a830-kube-api-access-6rwhj\") pod \"dnsmasq-dns-5c9776ccc5-lbcqc\" (UID: \"77e77908-f078-4711-8c40-5e0bbda2a830\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lbcqc" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.870212 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/77e77908-f078-4711-8c40-5e0bbda2a830-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-lbcqc\" (UID: \"77e77908-f078-4711-8c40-5e0bbda2a830\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lbcqc" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.870230 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/77e77908-f078-4711-8c40-5e0bbda2a830-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-lbcqc\" (UID: \"77e77908-f078-4711-8c40-5e0bbda2a830\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lbcqc" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.870254 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ff8b641-0d76-41ce-b6ac-7d708effebc0-ovndb-tls-certs\") pod \"neutron-7854df7c4b-dn4j7\" (UID: \"0ff8b641-0d76-41ce-b6ac-7d708effebc0\") " pod="openstack/neutron-7854df7c4b-dn4j7" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.870314 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77e77908-f078-4711-8c40-5e0bbda2a830-config\") pod \"dnsmasq-dns-5c9776ccc5-lbcqc\" (UID: \"77e77908-f078-4711-8c40-5e0bbda2a830\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lbcqc" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.870364 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/77e77908-f078-4711-8c40-5e0bbda2a830-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-lbcqc\" (UID: \"77e77908-f078-4711-8c40-5e0bbda2a830\") " 
pod="openstack/dnsmasq-dns-5c9776ccc5-lbcqc" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.870384 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0ff8b641-0d76-41ce-b6ac-7d708effebc0-httpd-config\") pod \"neutron-7854df7c4b-dn4j7\" (UID: \"0ff8b641-0d76-41ce-b6ac-7d708effebc0\") " pod="openstack/neutron-7854df7c4b-dn4j7" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.870439 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ff8b641-0d76-41ce-b6ac-7d708effebc0-config\") pod \"neutron-7854df7c4b-dn4j7\" (UID: \"0ff8b641-0d76-41ce-b6ac-7d708effebc0\") " pod="openstack/neutron-7854df7c4b-dn4j7" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.870487 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhjq8\" (UniqueName: \"kubernetes.io/projected/0ff8b641-0d76-41ce-b6ac-7d708effebc0-kube-api-access-nhjq8\") pod \"neutron-7854df7c4b-dn4j7\" (UID: \"0ff8b641-0d76-41ce-b6ac-7d708effebc0\") " pod="openstack/neutron-7854df7c4b-dn4j7" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.871300 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/77e77908-f078-4711-8c40-5e0bbda2a830-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-lbcqc\" (UID: \"77e77908-f078-4711-8c40-5e0bbda2a830\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lbcqc" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.872080 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/77e77908-f078-4711-8c40-5e0bbda2a830-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-lbcqc\" (UID: \"77e77908-f078-4711-8c40-5e0bbda2a830\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lbcqc" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.874598 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/77e77908-f078-4711-8c40-5e0bbda2a830-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-lbcqc\" (UID: \"77e77908-f078-4711-8c40-5e0bbda2a830\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lbcqc" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.874946 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77e77908-f078-4711-8c40-5e0bbda2a830-config\") pod \"dnsmasq-dns-5c9776ccc5-lbcqc\" (UID: \"77e77908-f078-4711-8c40-5e0bbda2a830\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lbcqc" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.875786 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/77e77908-f078-4711-8c40-5e0bbda2a830-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-lbcqc\" (UID: \"77e77908-f078-4711-8c40-5e0bbda2a830\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lbcqc" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.897758 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rwhj\" (UniqueName: \"kubernetes.io/projected/77e77908-f078-4711-8c40-5e0bbda2a830-kube-api-access-6rwhj\") pod \"dnsmasq-dns-5c9776ccc5-lbcqc\" (UID: \"77e77908-f078-4711-8c40-5e0bbda2a830\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lbcqc" Jan 29 17:07:53 crc kubenswrapper[4886]: 
I0129 17:07:53.973002 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff8b641-0d76-41ce-b6ac-7d708effebc0-combined-ca-bundle\") pod \"neutron-7854df7c4b-dn4j7\" (UID: \"0ff8b641-0d76-41ce-b6ac-7d708effebc0\") " pod="openstack/neutron-7854df7c4b-dn4j7" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.973420 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ff8b641-0d76-41ce-b6ac-7d708effebc0-ovndb-tls-certs\") pod \"neutron-7854df7c4b-dn4j7\" (UID: \"0ff8b641-0d76-41ce-b6ac-7d708effebc0\") " pod="openstack/neutron-7854df7c4b-dn4j7" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.973493 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0ff8b641-0d76-41ce-b6ac-7d708effebc0-httpd-config\") pod \"neutron-7854df7c4b-dn4j7\" (UID: \"0ff8b641-0d76-41ce-b6ac-7d708effebc0\") " pod="openstack/neutron-7854df7c4b-dn4j7" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.973546 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ff8b641-0d76-41ce-b6ac-7d708effebc0-config\") pod \"neutron-7854df7c4b-dn4j7\" (UID: \"0ff8b641-0d76-41ce-b6ac-7d708effebc0\") " pod="openstack/neutron-7854df7c4b-dn4j7" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.973582 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhjq8\" (UniqueName: \"kubernetes.io/projected/0ff8b641-0d76-41ce-b6ac-7d708effebc0-kube-api-access-nhjq8\") pod \"neutron-7854df7c4b-dn4j7\" (UID: \"0ff8b641-0d76-41ce-b6ac-7d708effebc0\") " pod="openstack/neutron-7854df7c4b-dn4j7" Jan 29 17:07:53 crc kubenswrapper[4886]: I0129 17:07:53.998604 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ff8b641-0d76-41ce-b6ac-7d708effebc0-ovndb-tls-certs\") pod \"neutron-7854df7c4b-dn4j7\" (UID: \"0ff8b641-0d76-41ce-b6ac-7d708effebc0\") " pod="openstack/neutron-7854df7c4b-dn4j7" Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.003171 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0ff8b641-0d76-41ce-b6ac-7d708effebc0-httpd-config\") pod \"neutron-7854df7c4b-dn4j7\" (UID: \"0ff8b641-0d76-41ce-b6ac-7d708effebc0\") " pod="openstack/neutron-7854df7c4b-dn4j7" Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.014348 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff8b641-0d76-41ce-b6ac-7d708effebc0-combined-ca-bundle\") pod \"neutron-7854df7c4b-dn4j7\" (UID: \"0ff8b641-0d76-41ce-b6ac-7d708effebc0\") " pod="openstack/neutron-7854df7c4b-dn4j7" Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.014473 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ff8b641-0d76-41ce-b6ac-7d708effebc0-config\") pod \"neutron-7854df7c4b-dn4j7\" (UID: \"0ff8b641-0d76-41ce-b6ac-7d708effebc0\") " pod="openstack/neutron-7854df7c4b-dn4j7" Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.022961 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhjq8\" (UniqueName: 
\"kubernetes.io/projected/0ff8b641-0d76-41ce-b6ac-7d708effebc0-kube-api-access-nhjq8\") pod \"neutron-7854df7c4b-dn4j7\" (UID: \"0ff8b641-0d76-41ce-b6ac-7d708effebc0\") " pod="openstack/neutron-7854df7c4b-dn4j7" Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.042873 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-lbcqc" Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.089485 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7854df7c4b-dn4j7" Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.442547 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3573eaa4-4c27-4747-a691-15ae61d152f3","Type":"ContainerStarted","Data":"1385108d3e83430f45d60172a4f29a52c80dc5f81117e1d2b4da4da320eaf2a2"} Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.479842 4886 generic.go:334] "Generic (PLEG): container finished" podID="a4e533f1-e8eb-4426-906e-35354266d610" containerID="bfb4e65e7631317b75e0b15c39b90031add550dcb40292d0be47c6410cfdc89e" exitCode=0 Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.481104 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-dv5ch" event={"ID":"a4e533f1-e8eb-4426-906e-35354266d610","Type":"ContainerDied","Data":"bfb4e65e7631317b75e0b15c39b90031add550dcb40292d0be47c6410cfdc89e"} Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.481130 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-dv5ch" event={"ID":"a4e533f1-e8eb-4426-906e-35354266d610","Type":"ContainerDied","Data":"9b0b0e72dbfa9a690950a4cb5f65710c32c08a1c18a1d00cb2ec594ac0b3c616"} Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.481142 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b0b0e72dbfa9a690950a4cb5f65710c32c08a1c18a1d00cb2ec594ac0b3c616" Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.496544 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-795f4db4bc-dv5ch" Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.595453 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4e533f1-e8eb-4426-906e-35354266d610-dns-svc\") pod \"a4e533f1-e8eb-4426-906e-35354266d610\" (UID: \"a4e533f1-e8eb-4426-906e-35354266d610\") " Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.597305 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm767\" (UniqueName: \"kubernetes.io/projected/a4e533f1-e8eb-4426-906e-35354266d610-kube-api-access-rm767\") pod \"a4e533f1-e8eb-4426-906e-35354266d610\" (UID: \"a4e533f1-e8eb-4426-906e-35354266d610\") " Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.597825 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4e533f1-e8eb-4426-906e-35354266d610-config\") pod \"a4e533f1-e8eb-4426-906e-35354266d610\" (UID: \"a4e533f1-e8eb-4426-906e-35354266d610\") " Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.597926 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4e533f1-e8eb-4426-906e-35354266d610-ovsdbserver-nb\") pod \"a4e533f1-e8eb-4426-906e-35354266d610\" (UID: \"a4e533f1-e8eb-4426-906e-35354266d610\") " Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.597986 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4e533f1-e8eb-4426-906e-35354266d610-ovsdbserver-sb\") pod \"a4e533f1-e8eb-4426-906e-35354266d610\" (UID: \"a4e533f1-e8eb-4426-906e-35354266d610\") " Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.598099 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4e533f1-e8eb-4426-906e-35354266d610-dns-swift-storage-0\") pod \"a4e533f1-e8eb-4426-906e-35354266d610\" (UID: \"a4e533f1-e8eb-4426-906e-35354266d610\") " Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.606675 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4e533f1-e8eb-4426-906e-35354266d610-kube-api-access-rm767" (OuterVolumeSpecName: "kube-api-access-rm767") pod "a4e533f1-e8eb-4426-906e-35354266d610" (UID: "a4e533f1-e8eb-4426-906e-35354266d610"). InnerVolumeSpecName "kube-api-access-rm767". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.649601 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc58d1b4-0d5e-4768-9a82-b6bbcca420a2" path="/var/lib/kubelet/pods/cc58d1b4-0d5e-4768-9a82-b6bbcca420a2/volumes" Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.687220 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4e533f1-e8eb-4426-906e-35354266d610-config" (OuterVolumeSpecName: "config") pod "a4e533f1-e8eb-4426-906e-35354266d610" (UID: "a4e533f1-e8eb-4426-906e-35354266d610"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.687650 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4e533f1-e8eb-4426-906e-35354266d610-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a4e533f1-e8eb-4426-906e-35354266d610" (UID: "a4e533f1-e8eb-4426-906e-35354266d610"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.719544 4886 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4e533f1-e8eb-4426-906e-35354266d610-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.719576 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm767\" (UniqueName: \"kubernetes.io/projected/a4e533f1-e8eb-4426-906e-35354266d610-kube-api-access-rm767\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.719589 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4e533f1-e8eb-4426-906e-35354266d610-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.719878 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4e533f1-e8eb-4426-906e-35354266d610-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a4e533f1-e8eb-4426-906e-35354266d610" (UID: "a4e533f1-e8eb-4426-906e-35354266d610"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.759197 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4e533f1-e8eb-4426-906e-35354266d610-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a4e533f1-e8eb-4426-906e-35354266d610" (UID: "a4e533f1-e8eb-4426-906e-35354266d610"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.825428 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4e533f1-e8eb-4426-906e-35354266d610-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.825461 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4e533f1-e8eb-4426-906e-35354266d610-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.835394 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4e533f1-e8eb-4426-906e-35354266d610-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a4e533f1-e8eb-4426-906e-35354266d610" (UID: "a4e533f1-e8eb-4426-906e-35354266d610"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.848646 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lbcqc"] Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.935921 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4e533f1-e8eb-4426-906e-35354266d610-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:55 crc kubenswrapper[4886]: I0129 17:07:55.154600 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7854df7c4b-dn4j7"] Jan 29 17:07:55 crc kubenswrapper[4886]: W0129 17:07:55.173522 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ff8b641_0d76_41ce_b6ac_7d708effebc0.slice/crio-e7a3e9e15910d73e70e0b6e954b7743de9f55b25dd0f0bfd34c348eb738633d2 WatchSource:0}: Error finding container e7a3e9e15910d73e70e0b6e954b7743de9f55b25dd0f0bfd34c348eb738633d2: Status 404 returned error can't find the container with id e7a3e9e15910d73e70e0b6e954b7743de9f55b25dd0f0bfd34c348eb738633d2 Jan 29 17:07:55 crc kubenswrapper[4886]: I0129 17:07:55.504677 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3573eaa4-4c27-4747-a691-15ae61d152f3","Type":"ContainerStarted","Data":"53e60943629db0c2467c81d05149376435438eedc3af65a98b6e31b78f97981c"} Jan 29 17:07:55 crc kubenswrapper[4886]: I0129 17:07:55.513786 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 29 17:07:55 crc kubenswrapper[4886]: I0129 17:07:55.514396 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7854df7c4b-dn4j7" event={"ID":"0ff8b641-0d76-41ce-b6ac-7d708effebc0","Type":"ContainerStarted","Data":"75e8cf0cad7d6d59d88f3f3bd6a97cab33d3691af01126d62cdae48b3d82240f"} Jan 29 17:07:55 crc kubenswrapper[4886]: I0129 17:07:55.514444 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7854df7c4b-dn4j7" event={"ID":"0ff8b641-0d76-41ce-b6ac-7d708effebc0","Type":"ContainerStarted","Data":"e7a3e9e15910d73e70e0b6e954b7743de9f55b25dd0f0bfd34c348eb738633d2"} Jan 29 17:07:55 crc kubenswrapper[4886]: I0129 17:07:55.521503 4886 generic.go:334] "Generic (PLEG): container finished" podID="77e77908-f078-4711-8c40-5e0bbda2a830" containerID="c105784d4cb4a65b24766afa5c392562f921a5e8ba938bcdad19639f8052e82a" exitCode=0 Jan 29 17:07:55 crc kubenswrapper[4886]: I0129 17:07:55.523264 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lbcqc" event={"ID":"77e77908-f078-4711-8c40-5e0bbda2a830","Type":"ContainerDied","Data":"c105784d4cb4a65b24766afa5c392562f921a5e8ba938bcdad19639f8052e82a"} Jan 29 17:07:55 crc kubenswrapper[4886]: I0129 17:07:55.523302 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lbcqc" event={"ID":"77e77908-f078-4711-8c40-5e0bbda2a830","Type":"ContainerStarted","Data":"00c8741e78cdef06ac95516aebc006fef061abb10bc976627d894974f2fc0223"} Jan 29 17:07:55 crc kubenswrapper[4886]: I0129 17:07:55.523360 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-795f4db4bc-dv5ch" Jan 29 17:07:55 crc kubenswrapper[4886]: I0129 17:07:55.587389 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-dv5ch"] Jan 29 17:07:55 crc kubenswrapper[4886]: I0129 17:07:55.597378 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-dv5ch"] Jan 29 17:07:55 crc kubenswrapper[4886]: I0129 17:07:55.743664 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-55f7ff7dd6-jj4jw" podUID="ea36feff-2438-49e4-b779-0b083addd0a8" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.215:9311/healthcheck\": read tcp 10.217.0.2:52532->10.217.0.215:9311: read: connection reset by peer" Jan 29 17:07:55 crc kubenswrapper[4886]: I0129 17:07:55.744010 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-55f7ff7dd6-jj4jw" podUID="ea36feff-2438-49e4-b779-0b083addd0a8" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.215:9311/healthcheck\": read tcp 10.217.0.2:52540->10.217.0.215:9311: read: connection reset by peer" Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.002611 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-846d49f49c-kc98b"] Jan 29 17:07:56 crc kubenswrapper[4886]: E0129 17:07:56.003455 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4e533f1-e8eb-4426-906e-35354266d610" containerName="dnsmasq-dns" Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.003474 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4e533f1-e8eb-4426-906e-35354266d610" containerName="dnsmasq-dns" Jan 29 17:07:56 crc kubenswrapper[4886]: E0129 17:07:56.003547 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4e533f1-e8eb-4426-906e-35354266d610" containerName="init" Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.003556 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4e533f1-e8eb-4426-906e-35354266d610" containerName="init" Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.004069 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4e533f1-e8eb-4426-906e-35354266d610" containerName="dnsmasq-dns" Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.005463 4886 util.go:30] "No sandbox for pod can be found. 
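The two "Probe failed" entries above show the kubelet's HTTP prober getting its connection reset while barbican-api restarts. As a rough illustration only, the following standalone Go sketch performs the same kind of check: the URL is copied from the log line (it only resolves from inside the cluster network), and the one-second timeout and "any 2xx/3xx passes" rule mirror kubelet's HTTP-probe defaults rather than any code taken from this log.

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// 1s matches the kubelet's default probe timeout (timeoutSeconds: 1).
	client := &http.Client{Timeout: 1 * time.Second}
	// Endpoint taken verbatim from the prober.go:107 lines above.
	resp, err := client.Get("http://10.217.0.215:9311/healthcheck")
	if err != nil {
		// While the API container restarts, this surfaces as
		// "read: connection reset by peer", as logged above.
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	// Kubelet HTTP probes pass on any status in [200, 400).
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		fmt.Println("probe ok:", resp.Status)
	} else {
		fmt.Println("probe failed: status", resp.Status)
	}
}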
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.011871 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.012086 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.029391 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-846d49f49c-kc98b"]
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.106720 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/344feff6-8139-425e-b7dc-f35fe5b17247-config\") pod \"neutron-846d49f49c-kc98b\" (UID: \"344feff6-8139-425e-b7dc-f35fe5b17247\") " pod="openstack/neutron-846d49f49c-kc98b"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.107587 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/344feff6-8139-425e-b7dc-f35fe5b17247-internal-tls-certs\") pod \"neutron-846d49f49c-kc98b\" (UID: \"344feff6-8139-425e-b7dc-f35fe5b17247\") " pod="openstack/neutron-846d49f49c-kc98b"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.107675 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/344feff6-8139-425e-b7dc-f35fe5b17247-combined-ca-bundle\") pod \"neutron-846d49f49c-kc98b\" (UID: \"344feff6-8139-425e-b7dc-f35fe5b17247\") " pod="openstack/neutron-846d49f49c-kc98b"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.107703 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/344feff6-8139-425e-b7dc-f35fe5b17247-public-tls-certs\") pod \"neutron-846d49f49c-kc98b\" (UID: \"344feff6-8139-425e-b7dc-f35fe5b17247\") " pod="openstack/neutron-846d49f49c-kc98b"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.107809 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5gnk\" (UniqueName: \"kubernetes.io/projected/344feff6-8139-425e-b7dc-f35fe5b17247-kube-api-access-x5gnk\") pod \"neutron-846d49f49c-kc98b\" (UID: \"344feff6-8139-425e-b7dc-f35fe5b17247\") " pod="openstack/neutron-846d49f49c-kc98b"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.107838 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/344feff6-8139-425e-b7dc-f35fe5b17247-httpd-config\") pod \"neutron-846d49f49c-kc98b\" (UID: \"344feff6-8139-425e-b7dc-f35fe5b17247\") " pod="openstack/neutron-846d49f49c-kc98b"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.107886 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/344feff6-8139-425e-b7dc-f35fe5b17247-ovndb-tls-certs\") pod \"neutron-846d49f49c-kc98b\" (UID: \"344feff6-8139-425e-b7dc-f35fe5b17247\") " pod="openstack/neutron-846d49f49c-kc98b"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.178545 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.220339 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/344feff6-8139-425e-b7dc-f35fe5b17247-internal-tls-certs\") pod \"neutron-846d49f49c-kc98b\" (UID: \"344feff6-8139-425e-b7dc-f35fe5b17247\") " pod="openstack/neutron-846d49f49c-kc98b"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.220522 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/344feff6-8139-425e-b7dc-f35fe5b17247-combined-ca-bundle\") pod \"neutron-846d49f49c-kc98b\" (UID: \"344feff6-8139-425e-b7dc-f35fe5b17247\") " pod="openstack/neutron-846d49f49c-kc98b"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.220584 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/344feff6-8139-425e-b7dc-f35fe5b17247-public-tls-certs\") pod \"neutron-846d49f49c-kc98b\" (UID: \"344feff6-8139-425e-b7dc-f35fe5b17247\") " pod="openstack/neutron-846d49f49c-kc98b"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.221627 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5gnk\" (UniqueName: \"kubernetes.io/projected/344feff6-8139-425e-b7dc-f35fe5b17247-kube-api-access-x5gnk\") pod \"neutron-846d49f49c-kc98b\" (UID: \"344feff6-8139-425e-b7dc-f35fe5b17247\") " pod="openstack/neutron-846d49f49c-kc98b"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.221671 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/344feff6-8139-425e-b7dc-f35fe5b17247-httpd-config\") pod \"neutron-846d49f49c-kc98b\" (UID: \"344feff6-8139-425e-b7dc-f35fe5b17247\") " pod="openstack/neutron-846d49f49c-kc98b"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.221743 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/344feff6-8139-425e-b7dc-f35fe5b17247-ovndb-tls-certs\") pod \"neutron-846d49f49c-kc98b\" (UID: \"344feff6-8139-425e-b7dc-f35fe5b17247\") " pod="openstack/neutron-846d49f49c-kc98b"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.221885 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/344feff6-8139-425e-b7dc-f35fe5b17247-config\") pod \"neutron-846d49f49c-kc98b\" (UID: \"344feff6-8139-425e-b7dc-f35fe5b17247\") " pod="openstack/neutron-846d49f49c-kc98b"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.277423 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/344feff6-8139-425e-b7dc-f35fe5b17247-config\") pod \"neutron-846d49f49c-kc98b\" (UID: \"344feff6-8139-425e-b7dc-f35fe5b17247\") " pod="openstack/neutron-846d49f49c-kc98b"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.281143 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/344feff6-8139-425e-b7dc-f35fe5b17247-combined-ca-bundle\") pod \"neutron-846d49f49c-kc98b\" (UID: \"344feff6-8139-425e-b7dc-f35fe5b17247\") " pod="openstack/neutron-846d49f49c-kc98b"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.282020 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/344feff6-8139-425e-b7dc-f35fe5b17247-httpd-config\") pod \"neutron-846d49f49c-kc98b\" (UID: \"344feff6-8139-425e-b7dc-f35fe5b17247\") " pod="openstack/neutron-846d49f49c-kc98b"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.284953 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/344feff6-8139-425e-b7dc-f35fe5b17247-public-tls-certs\") pod \"neutron-846d49f49c-kc98b\" (UID: \"344feff6-8139-425e-b7dc-f35fe5b17247\") " pod="openstack/neutron-846d49f49c-kc98b"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.285039 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/344feff6-8139-425e-b7dc-f35fe5b17247-internal-tls-certs\") pod \"neutron-846d49f49c-kc98b\" (UID: \"344feff6-8139-425e-b7dc-f35fe5b17247\") " pod="openstack/neutron-846d49f49c-kc98b"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.285350 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/344feff6-8139-425e-b7dc-f35fe5b17247-ovndb-tls-certs\") pod \"neutron-846d49f49c-kc98b\" (UID: \"344feff6-8139-425e-b7dc-f35fe5b17247\") " pod="openstack/neutron-846d49f49c-kc98b"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.289303 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5gnk\" (UniqueName: \"kubernetes.io/projected/344feff6-8139-425e-b7dc-f35fe5b17247-kube-api-access-x5gnk\") pod \"neutron-846d49f49c-kc98b\" (UID: \"344feff6-8139-425e-b7dc-f35fe5b17247\") " pod="openstack/neutron-846d49f49c-kc98b"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.345285 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-846d49f49c-kc98b"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.593105 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lbcqc" event={"ID":"77e77908-f078-4711-8c40-5e0bbda2a830","Type":"ContainerStarted","Data":"53ca240c0a66f67f4b44ce143c7902f3cc1ddf7f2d59ac9c55d73990e13de5e8"}
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.593595 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-lbcqc"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.600636 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7854df7c4b-dn4j7" event={"ID":"0ff8b641-0d76-41ce-b6ac-7d708effebc0","Type":"ContainerStarted","Data":"f3ee0a56aaca61cef2419de911db690ccd8876c78a545e2b8864e16aa4ff333a"}
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.600960 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7854df7c4b-dn4j7"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.601120 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-55f7ff7dd6-jj4jw"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.614292 4886 generic.go:334] "Generic (PLEG): container finished" podID="ea36feff-2438-49e4-b779-0b083addd0a8" containerID="8bc4314631c2d889fe7693108f39c4873628c917868bfba6190057b2b09695e2" exitCode=0
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.615436 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55f7ff7dd6-jj4jw" event={"ID":"ea36feff-2438-49e4-b779-0b083addd0a8","Type":"ContainerDied","Data":"8bc4314631c2d889fe7693108f39c4873628c917868bfba6190057b2b09695e2"}
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.615491 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55f7ff7dd6-jj4jw" event={"ID":"ea36feff-2438-49e4-b779-0b083addd0a8","Type":"ContainerDied","Data":"e9dafe9a7a14455f6d6567489f608749fce9a0af4812468a1f99388ab4f30929"}
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.615512 4886 scope.go:117] "RemoveContainer" containerID="8bc4314631c2d889fe7693108f39c4873628c917868bfba6190057b2b09695e2"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.621754 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-lbcqc" podStartSLOduration=3.621729079 podStartE2EDuration="3.621729079s" podCreationTimestamp="2026-01-29 17:07:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:07:56.613837836 +0000 UTC m=+2759.522557108" watchObservedRunningTime="2026-01-29 17:07:56.621729079 +0000 UTC m=+2759.530448351"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.635451 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea36feff-2438-49e4-b779-0b083addd0a8-logs\") pod \"ea36feff-2438-49e4-b779-0b083addd0a8\" (UID: \"ea36feff-2438-49e4-b779-0b083addd0a8\") "
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.635576 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25kn5\" (UniqueName: \"kubernetes.io/projected/ea36feff-2438-49e4-b779-0b083addd0a8-kube-api-access-25kn5\") pod \"ea36feff-2438-49e4-b779-0b083addd0a8\" (UID: \"ea36feff-2438-49e4-b779-0b083addd0a8\") "
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.635682 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea36feff-2438-49e4-b779-0b083addd0a8-combined-ca-bundle\") pod \"ea36feff-2438-49e4-b779-0b083addd0a8\" (UID: \"ea36feff-2438-49e4-b779-0b083addd0a8\") "
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.635748 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4e533f1-e8eb-4426-906e-35354266d610" path="/var/lib/kubelet/pods/a4e533f1-e8eb-4426-906e-35354266d610/volumes"
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.635989 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea36feff-2438-49e4-b779-0b083addd0a8-config-data\") pod \"ea36feff-2438-49e4-b779-0b083addd0a8\" (UID: \"ea36feff-2438-49e4-b779-0b083addd0a8\") "
Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.635995 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea36feff-2438-49e4-b779-0b083addd0a8-logs" (OuterVolumeSpecName: "logs") pod "ea36feff-2438-49e4-b779-0b083addd0a8" (UID: "ea36feff-2438-49e4-b779-0b083addd0a8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
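The "SyncLoop (PLEG)" entries above carry their payload as a JSON object after event=, so a captured journal can be mined for container lifecycle transitions with nothing but the standard library. A minimal sketch, assuming only the three fields that actually appear in these lines (pod UID, event type, container or sandbox ID); the sample line is copied from this journal:

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// plegEvent mirrors the fields printed in the event={...} payloads above;
// no fields beyond what the log shows are assumed.
type plegEvent struct {
	ID   string // pod UID
	Type string // ContainerStarted, ContainerDied, ...
	Data string // container or sandbox ID
}

func main() {
	line := `Jan 29 17:07:54 crc kubenswrapper[4886]: I0129 17:07:54.442547 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3573eaa4-4c27-4747-a691-15ae61d152f3","Type":"ContainerStarted","Data":"1385108d3e83430f45d60172a4f29a52c80dc5f81117e1d2b4da4da320eaf2a2"}`
	i := strings.Index(line, "event=")
	if i < 0 {
		return
	}
	var ev plegEvent
	if err := json.Unmarshal([]byte(line[i+len("event="):]), &ev); err != nil {
		fmt.Println("parse error:", err)
		return
	}
	fmt.Printf("pod %s: %s %s\n", ev.ID, ev.Type, ev.Data)
}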
"kubernetes.io/empty-dir/ea36feff-2438-49e4-b779-0b083addd0a8-logs" (OuterVolumeSpecName: "logs") pod "ea36feff-2438-49e4-b779-0b083addd0a8" (UID: "ea36feff-2438-49e4-b779-0b083addd0a8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.636013 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea36feff-2438-49e4-b779-0b083addd0a8-config-data-custom\") pod \"ea36feff-2438-49e4-b779-0b083addd0a8\" (UID: \"ea36feff-2438-49e4-b779-0b083addd0a8\") " Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.643239 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea36feff-2438-49e4-b779-0b083addd0a8-logs\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.649509 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea36feff-2438-49e4-b779-0b083addd0a8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ea36feff-2438-49e4-b779-0b083addd0a8" (UID: "ea36feff-2438-49e4-b779-0b083addd0a8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.652263 4886 scope.go:117] "RemoveContainer" containerID="f23c7cc8a8209a15c4be1f866071e7d19219ea178dc6b2496da6cf2510dacfc5" Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.652500 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea36feff-2438-49e4-b779-0b083addd0a8-kube-api-access-25kn5" (OuterVolumeSpecName: "kube-api-access-25kn5") pod "ea36feff-2438-49e4-b779-0b083addd0a8" (UID: "ea36feff-2438-49e4-b779-0b083addd0a8"). InnerVolumeSpecName "kube-api-access-25kn5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.694844 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7854df7c4b-dn4j7" podStartSLOduration=3.694822057 podStartE2EDuration="3.694822057s" podCreationTimestamp="2026-01-29 17:07:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:07:56.666820689 +0000 UTC m=+2759.575539961" watchObservedRunningTime="2026-01-29 17:07:56.694822057 +0000 UTC m=+2759.603541329" Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.712973 4886 scope.go:117] "RemoveContainer" containerID="8bc4314631c2d889fe7693108f39c4873628c917868bfba6190057b2b09695e2" Jan 29 17:07:56 crc kubenswrapper[4886]: E0129 17:07:56.713417 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bc4314631c2d889fe7693108f39c4873628c917868bfba6190057b2b09695e2\": container with ID starting with 8bc4314631c2d889fe7693108f39c4873628c917868bfba6190057b2b09695e2 not found: ID does not exist" containerID="8bc4314631c2d889fe7693108f39c4873628c917868bfba6190057b2b09695e2" Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.713454 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bc4314631c2d889fe7693108f39c4873628c917868bfba6190057b2b09695e2"} err="failed to get container status \"8bc4314631c2d889fe7693108f39c4873628c917868bfba6190057b2b09695e2\": rpc error: code = NotFound desc = could not find container \"8bc4314631c2d889fe7693108f39c4873628c917868bfba6190057b2b09695e2\": container with ID starting with 8bc4314631c2d889fe7693108f39c4873628c917868bfba6190057b2b09695e2 not found: ID does not exist" Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.713480 4886 scope.go:117] "RemoveContainer" containerID="f23c7cc8a8209a15c4be1f866071e7d19219ea178dc6b2496da6cf2510dacfc5" Jan 29 17:07:56 crc kubenswrapper[4886]: E0129 17:07:56.713672 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f23c7cc8a8209a15c4be1f866071e7d19219ea178dc6b2496da6cf2510dacfc5\": container with ID starting with f23c7cc8a8209a15c4be1f866071e7d19219ea178dc6b2496da6cf2510dacfc5 not found: ID does not exist" containerID="f23c7cc8a8209a15c4be1f866071e7d19219ea178dc6b2496da6cf2510dacfc5" Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.713701 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f23c7cc8a8209a15c4be1f866071e7d19219ea178dc6b2496da6cf2510dacfc5"} err="failed to get container status \"f23c7cc8a8209a15c4be1f866071e7d19219ea178dc6b2496da6cf2510dacfc5\": rpc error: code = NotFound desc = could not find container \"f23c7cc8a8209a15c4be1f866071e7d19219ea178dc6b2496da6cf2510dacfc5\": container with ID starting with f23c7cc8a8209a15c4be1f866071e7d19219ea178dc6b2496da6cf2510dacfc5 not found: ID does not exist" Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.726673 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea36feff-2438-49e4-b779-0b083addd0a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea36feff-2438-49e4-b779-0b083addd0a8" (UID: "ea36feff-2438-49e4-b779-0b083addd0a8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.751658 4886 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea36feff-2438-49e4-b779-0b083addd0a8-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.751687 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25kn5\" (UniqueName: \"kubernetes.io/projected/ea36feff-2438-49e4-b779-0b083addd0a8-kube-api-access-25kn5\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.751700 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea36feff-2438-49e4-b779-0b083addd0a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.764807 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.787436 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea36feff-2438-49e4-b779-0b083addd0a8-config-data" (OuterVolumeSpecName: "config-data") pod "ea36feff-2438-49e4-b779-0b083addd0a8" (UID: "ea36feff-2438-49e4-b779-0b083addd0a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:56 crc kubenswrapper[4886]: I0129 17:07:56.861860 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea36feff-2438-49e4-b779-0b083addd0a8-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:57 crc kubenswrapper[4886]: I0129 17:07:57.136030 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-846d49f49c-kc98b"] Jan 29 17:07:57 crc kubenswrapper[4886]: I0129 17:07:57.649814 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3573eaa4-4c27-4747-a691-15ae61d152f3","Type":"ContainerStarted","Data":"8c944ebd33123a646892f458423259ef498c1ed94b2d49c157cf74c9b8b08797"} Jan 29 17:07:57 crc kubenswrapper[4886]: I0129 17:07:57.650425 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 29 17:07:57 crc kubenswrapper[4886]: I0129 17:07:57.653404 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-55f7ff7dd6-jj4jw" Jan 29 17:07:57 crc kubenswrapper[4886]: I0129 17:07:57.656883 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="79744cfd-ecdc-42c4-b70e-bb957640a11c" containerName="cinder-scheduler" containerID="cri-o://dd01b92d286ab63ee03bff172b9b03aa69d2a7db780bc4a7761f9cf8e7790134" gracePeriod=30 Jan 29 17:07:57 crc kubenswrapper[4886]: I0129 17:07:57.657031 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-846d49f49c-kc98b" event={"ID":"344feff6-8139-425e-b7dc-f35fe5b17247","Type":"ContainerStarted","Data":"c1a97bc78fc175b0c5ad8818956524b324cc6770550b8a275346ef7d541fd8eb"} Jan 29 17:07:57 crc kubenswrapper[4886]: I0129 17:07:57.657104 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-846d49f49c-kc98b" event={"ID":"344feff6-8139-425e-b7dc-f35fe5b17247","Type":"ContainerStarted","Data":"1e02866a9505d80313275c6450c11906a9e90c56b2c3f33739805d8c22dbd4ce"} Jan 29 17:07:57 crc kubenswrapper[4886]: I0129 17:07:57.657121 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-846d49f49c-kc98b" event={"ID":"344feff6-8139-425e-b7dc-f35fe5b17247","Type":"ContainerStarted","Data":"370631686ddc7aba98f4a6b4634378cfb8acd9271e8e45abb240c119128e8252"} Jan 29 17:07:57 crc kubenswrapper[4886]: I0129 17:07:57.657246 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="79744cfd-ecdc-42c4-b70e-bb957640a11c" containerName="probe" containerID="cri-o://3d38ab3f39b8f10e80b68dcbf56b94dd2483224e667fea1a1a75ada7c0ecf901" gracePeriod=30 Jan 29 17:07:57 crc kubenswrapper[4886]: I0129 17:07:57.657707 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-846d49f49c-kc98b" Jan 29 17:07:57 crc kubenswrapper[4886]: I0129 17:07:57.679593 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.679566964 podStartE2EDuration="5.679566964s" podCreationTimestamp="2026-01-29 17:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:07:57.669917672 +0000 UTC m=+2760.578636944" watchObservedRunningTime="2026-01-29 17:07:57.679566964 +0000 UTC m=+2760.588286236" Jan 29 17:07:57 crc kubenswrapper[4886]: I0129 17:07:57.702446 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-846d49f49c-kc98b" podStartSLOduration=2.702429038 podStartE2EDuration="2.702429038s" podCreationTimestamp="2026-01-29 17:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:07:57.700867884 +0000 UTC m=+2760.609587156" watchObservedRunningTime="2026-01-29 17:07:57.702429038 +0000 UTC m=+2760.611148330" Jan 29 17:07:57 crc kubenswrapper[4886]: I0129 17:07:57.725598 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-55f7ff7dd6-jj4jw"] Jan 29 17:07:57 crc kubenswrapper[4886]: I0129 17:07:57.822192 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-55f7ff7dd6-jj4jw"] Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.347894 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-54f8bbfbf-9qjxm"] Jan 29 17:07:58 crc kubenswrapper[4886]: E0129 17:07:58.348588 4886 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea36feff-2438-49e4-b779-0b083addd0a8" containerName="barbican-api-log" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.348605 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea36feff-2438-49e4-b779-0b083addd0a8" containerName="barbican-api-log" Jan 29 17:07:58 crc kubenswrapper[4886]: E0129 17:07:58.348635 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea36feff-2438-49e4-b779-0b083addd0a8" containerName="barbican-api" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.348642 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea36feff-2438-49e4-b779-0b083addd0a8" containerName="barbican-api" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.350099 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea36feff-2438-49e4-b779-0b083addd0a8" containerName="barbican-api" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.350143 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea36feff-2438-49e4-b779-0b083addd0a8" containerName="barbican-api-log" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.351002 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-54f8bbfbf-9qjxm" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.357300 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.357582 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.357748 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-658st" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.375066 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-54f8bbfbf-9qjxm"] Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.411770 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f-config-data-custom\") pod \"heat-engine-54f8bbfbf-9qjxm\" (UID: \"92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f\") " pod="openstack/heat-engine-54f8bbfbf-9qjxm" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.411858 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn4rg\" (UniqueName: \"kubernetes.io/projected/92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f-kube-api-access-bn4rg\") pod \"heat-engine-54f8bbfbf-9qjxm\" (UID: \"92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f\") " pod="openstack/heat-engine-54f8bbfbf-9qjxm" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.411877 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f-combined-ca-bundle\") pod \"heat-engine-54f8bbfbf-9qjxm\" (UID: \"92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f\") " pod="openstack/heat-engine-54f8bbfbf-9qjxm" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.411916 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f-config-data\") pod \"heat-engine-54f8bbfbf-9qjxm\" (UID: 
\"92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f\") " pod="openstack/heat-engine-54f8bbfbf-9qjxm" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.511782 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lbcqc"] Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.514105 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f-config-data-custom\") pod \"heat-engine-54f8bbfbf-9qjxm\" (UID: \"92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f\") " pod="openstack/heat-engine-54f8bbfbf-9qjxm" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.514190 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn4rg\" (UniqueName: \"kubernetes.io/projected/92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f-kube-api-access-bn4rg\") pod \"heat-engine-54f8bbfbf-9qjxm\" (UID: \"92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f\") " pod="openstack/heat-engine-54f8bbfbf-9qjxm" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.514208 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f-combined-ca-bundle\") pod \"heat-engine-54f8bbfbf-9qjxm\" (UID: \"92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f\") " pod="openstack/heat-engine-54f8bbfbf-9qjxm" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.514244 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f-config-data\") pod \"heat-engine-54f8bbfbf-9qjxm\" (UID: \"92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f\") " pod="openstack/heat-engine-54f8bbfbf-9qjxm" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.529221 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f-config-data-custom\") pod \"heat-engine-54f8bbfbf-9qjxm\" (UID: \"92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f\") " pod="openstack/heat-engine-54f8bbfbf-9qjxm" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.548643 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f-config-data\") pod \"heat-engine-54f8bbfbf-9qjxm\" (UID: \"92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f\") " pod="openstack/heat-engine-54f8bbfbf-9qjxm" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.549024 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f-combined-ca-bundle\") pod \"heat-engine-54f8bbfbf-9qjxm\" (UID: \"92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f\") " pod="openstack/heat-engine-54f8bbfbf-9qjxm" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.554372 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-btn45"] Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.564157 4886 util.go:30] "No sandbox for pod can be found. 
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.574838 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn4rg\" (UniqueName: \"kubernetes.io/projected/92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f-kube-api-access-bn4rg\") pod \"heat-engine-54f8bbfbf-9qjxm\" (UID: \"92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f\") " pod="openstack/heat-engine-54f8bbfbf-9qjxm"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.581661 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6f6c4bddd6-xqtdm"]
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.583112 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6f6c4bddd6-xqtdm"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.596759 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.622855 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-557f889856-kwzsw"]
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.639282 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-btn45"]
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.641234 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-557f889856-kwzsw"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.650146 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.698141 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea36feff-2438-49e4-b779-0b083addd0a8" path="/var/lib/kubelet/pods/ea36feff-2438-49e4-b779-0b083addd0a8/volumes"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.699211 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6f6c4bddd6-xqtdm"]
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.699241 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-658st"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.706458 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-lbcqc" podUID="77e77908-f078-4711-8c40-5e0bbda2a830" containerName="dnsmasq-dns" containerID="cri-o://53ca240c0a66f67f4b44ce143c7902f3cc1ddf7f2d59ac9c55d73990e13de5e8" gracePeriod=10
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.707122 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-54f8bbfbf-9qjxm"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.733380 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da0e4cf4-a01f-48df-b61b-796c8bc9f60a-config-data\") pod \"heat-cfnapi-6f6c4bddd6-xqtdm\" (UID: \"da0e4cf4-a01f-48df-b61b-796c8bc9f60a\") " pod="openstack/heat-cfnapi-6f6c4bddd6-xqtdm"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.733433 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6k7c\" (UniqueName: \"kubernetes.io/projected/da76d93d-7c2d-485e-b5e0-229f4254d74b-kube-api-access-m6k7c\") pod \"dnsmasq-dns-7756b9d78c-btn45\" (UID: \"da76d93d-7c2d-485e-b5e0-229f4254d74b\") " pod="openstack/dnsmasq-dns-7756b9d78c-btn45"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.733457 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da76d93d-7c2d-485e-b5e0-229f4254d74b-config\") pod \"dnsmasq-dns-7756b9d78c-btn45\" (UID: \"da76d93d-7c2d-485e-b5e0-229f4254d74b\") " pod="openstack/dnsmasq-dns-7756b9d78c-btn45"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.733519 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fa8d357-cef3-43d1-8338-386d9880bb82-config-data\") pod \"heat-api-557f889856-kwzsw\" (UID: \"3fa8d357-cef3-43d1-8338-386d9880bb82\") " pod="openstack/heat-api-557f889856-kwzsw"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.733567 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da76d93d-7c2d-485e-b5e0-229f4254d74b-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-btn45\" (UID: \"da76d93d-7c2d-485e-b5e0-229f4254d74b\") " pod="openstack/dnsmasq-dns-7756b9d78c-btn45"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.733581 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da76d93d-7c2d-485e-b5e0-229f4254d74b-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-btn45\" (UID: \"da76d93d-7c2d-485e-b5e0-229f4254d74b\") " pod="openstack/dnsmasq-dns-7756b9d78c-btn45"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.733606 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fa8d357-cef3-43d1-8338-386d9880bb82-combined-ca-bundle\") pod \"heat-api-557f889856-kwzsw\" (UID: \"3fa8d357-cef3-43d1-8338-386d9880bb82\") " pod="openstack/heat-api-557f889856-kwzsw"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.733824 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da76d93d-7c2d-485e-b5e0-229f4254d74b-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-btn45\" (UID: \"da76d93d-7c2d-485e-b5e0-229f4254d74b\") " pod="openstack/dnsmasq-dns-7756b9d78c-btn45"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.733897 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3fa8d357-cef3-43d1-8338-386d9880bb82-config-data-custom\") pod \"heat-api-557f889856-kwzsw\" (UID: \"3fa8d357-cef3-43d1-8338-386d9880bb82\") " pod="openstack/heat-api-557f889856-kwzsw"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.733915 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da76d93d-7c2d-485e-b5e0-229f4254d74b-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-btn45\" (UID: \"da76d93d-7c2d-485e-b5e0-229f4254d74b\") " pod="openstack/dnsmasq-dns-7756b9d78c-btn45"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.733934 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da0e4cf4-a01f-48df-b61b-796c8bc9f60a-config-data-custom\") pod \"heat-cfnapi-6f6c4bddd6-xqtdm\" (UID: \"da0e4cf4-a01f-48df-b61b-796c8bc9f60a\") " pod="openstack/heat-cfnapi-6f6c4bddd6-xqtdm"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.733960 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhn24\" (UniqueName: \"kubernetes.io/projected/3fa8d357-cef3-43d1-8338-386d9880bb82-kube-api-access-xhn24\") pod \"heat-api-557f889856-kwzsw\" (UID: \"3fa8d357-cef3-43d1-8338-386d9880bb82\") " pod="openstack/heat-api-557f889856-kwzsw"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.734005 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khr6q\" (UniqueName: \"kubernetes.io/projected/da0e4cf4-a01f-48df-b61b-796c8bc9f60a-kube-api-access-khr6q\") pod \"heat-cfnapi-6f6c4bddd6-xqtdm\" (UID: \"da0e4cf4-a01f-48df-b61b-796c8bc9f60a\") " pod="openstack/heat-cfnapi-6f6c4bddd6-xqtdm"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.734037 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0e4cf4-a01f-48df-b61b-796c8bc9f60a-combined-ca-bundle\") pod \"heat-cfnapi-6f6c4bddd6-xqtdm\" (UID: \"da0e4cf4-a01f-48df-b61b-796c8bc9f60a\") " pod="openstack/heat-cfnapi-6f6c4bddd6-xqtdm"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.780938 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-557f889856-kwzsw"]
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.836038 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3fa8d357-cef3-43d1-8338-386d9880bb82-config-data-custom\") pod \"heat-api-557f889856-kwzsw\" (UID: \"3fa8d357-cef3-43d1-8338-386d9880bb82\") " pod="openstack/heat-api-557f889856-kwzsw"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.836075 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da76d93d-7c2d-485e-b5e0-229f4254d74b-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-btn45\" (UID: \"da76d93d-7c2d-485e-b5e0-229f4254d74b\") " pod="openstack/dnsmasq-dns-7756b9d78c-btn45"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.836104 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da0e4cf4-a01f-48df-b61b-796c8bc9f60a-config-data-custom\") pod \"heat-cfnapi-6f6c4bddd6-xqtdm\" (UID: \"da0e4cf4-a01f-48df-b61b-796c8bc9f60a\") " pod="openstack/heat-cfnapi-6f6c4bddd6-xqtdm"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.836130 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhn24\" (UniqueName: \"kubernetes.io/projected/3fa8d357-cef3-43d1-8338-386d9880bb82-kube-api-access-xhn24\") pod \"heat-api-557f889856-kwzsw\" (UID: \"3fa8d357-cef3-43d1-8338-386d9880bb82\") " pod="openstack/heat-api-557f889856-kwzsw"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.836167 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khr6q\" (UniqueName: \"kubernetes.io/projected/da0e4cf4-a01f-48df-b61b-796c8bc9f60a-kube-api-access-khr6q\") pod \"heat-cfnapi-6f6c4bddd6-xqtdm\" (UID: \"da0e4cf4-a01f-48df-b61b-796c8bc9f60a\") " pod="openstack/heat-cfnapi-6f6c4bddd6-xqtdm"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.836199 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0e4cf4-a01f-48df-b61b-796c8bc9f60a-combined-ca-bundle\") pod \"heat-cfnapi-6f6c4bddd6-xqtdm\" (UID: \"da0e4cf4-a01f-48df-b61b-796c8bc9f60a\") " pod="openstack/heat-cfnapi-6f6c4bddd6-xqtdm"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.836274 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da0e4cf4-a01f-48df-b61b-796c8bc9f60a-config-data\") pod \"heat-cfnapi-6f6c4bddd6-xqtdm\" (UID: \"da0e4cf4-a01f-48df-b61b-796c8bc9f60a\") " pod="openstack/heat-cfnapi-6f6c4bddd6-xqtdm"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.836300 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6k7c\" (UniqueName: \"kubernetes.io/projected/da76d93d-7c2d-485e-b5e0-229f4254d74b-kube-api-access-m6k7c\") pod \"dnsmasq-dns-7756b9d78c-btn45\" (UID: \"da76d93d-7c2d-485e-b5e0-229f4254d74b\") " pod="openstack/dnsmasq-dns-7756b9d78c-btn45"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.836338 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da76d93d-7c2d-485e-b5e0-229f4254d74b-config\") pod \"dnsmasq-dns-7756b9d78c-btn45\" (UID: \"da76d93d-7c2d-485e-b5e0-229f4254d74b\") " pod="openstack/dnsmasq-dns-7756b9d78c-btn45"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.836468 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fa8d357-cef3-43d1-8338-386d9880bb82-config-data\") pod \"heat-api-557f889856-kwzsw\" (UID: \"3fa8d357-cef3-43d1-8338-386d9880bb82\") " pod="openstack/heat-api-557f889856-kwzsw"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.836521 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da76d93d-7c2d-485e-b5e0-229f4254d74b-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-btn45\" (UID: \"da76d93d-7c2d-485e-b5e0-229f4254d74b\") " pod="openstack/dnsmasq-dns-7756b9d78c-btn45"
Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.836536 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da76d93d-7c2d-485e-b5e0-229f4254d74b-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-btn45\" (UID: \"da76d93d-7c2d-485e-b5e0-229f4254d74b\") " pod="openstack/dnsmasq-dns-7756b9d78c-btn45"
pod="openstack/dnsmasq-dns-7756b9d78c-btn45" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.836560 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fa8d357-cef3-43d1-8338-386d9880bb82-combined-ca-bundle\") pod \"heat-api-557f889856-kwzsw\" (UID: \"3fa8d357-cef3-43d1-8338-386d9880bb82\") " pod="openstack/heat-api-557f889856-kwzsw" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.836629 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da76d93d-7c2d-485e-b5e0-229f4254d74b-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-btn45\" (UID: \"da76d93d-7c2d-485e-b5e0-229f4254d74b\") " pod="openstack/dnsmasq-dns-7756b9d78c-btn45" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.837425 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da76d93d-7c2d-485e-b5e0-229f4254d74b-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-btn45\" (UID: \"da76d93d-7c2d-485e-b5e0-229f4254d74b\") " pod="openstack/dnsmasq-dns-7756b9d78c-btn45" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.839400 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da76d93d-7c2d-485e-b5e0-229f4254d74b-config\") pod \"dnsmasq-dns-7756b9d78c-btn45\" (UID: \"da76d93d-7c2d-485e-b5e0-229f4254d74b\") " pod="openstack/dnsmasq-dns-7756b9d78c-btn45" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.843500 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da76d93d-7c2d-485e-b5e0-229f4254d74b-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-btn45\" (UID: \"da76d93d-7c2d-485e-b5e0-229f4254d74b\") " pod="openstack/dnsmasq-dns-7756b9d78c-btn45" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.846198 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da76d93d-7c2d-485e-b5e0-229f4254d74b-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-btn45\" (UID: \"da76d93d-7c2d-485e-b5e0-229f4254d74b\") " pod="openstack/dnsmasq-dns-7756b9d78c-btn45" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.847997 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da76d93d-7c2d-485e-b5e0-229f4254d74b-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-btn45\" (UID: \"da76d93d-7c2d-485e-b5e0-229f4254d74b\") " pod="openstack/dnsmasq-dns-7756b9d78c-btn45" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.849187 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fa8d357-cef3-43d1-8338-386d9880bb82-config-data\") pod \"heat-api-557f889856-kwzsw\" (UID: \"3fa8d357-cef3-43d1-8338-386d9880bb82\") " pod="openstack/heat-api-557f889856-kwzsw" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.851682 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fa8d357-cef3-43d1-8338-386d9880bb82-combined-ca-bundle\") pod \"heat-api-557f889856-kwzsw\" (UID: \"3fa8d357-cef3-43d1-8338-386d9880bb82\") " pod="openstack/heat-api-557f889856-kwzsw" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.860302 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da0e4cf4-a01f-48df-b61b-796c8bc9f60a-config-data-custom\") pod \"heat-cfnapi-6f6c4bddd6-xqtdm\" (UID: \"da0e4cf4-a01f-48df-b61b-796c8bc9f60a\") " pod="openstack/heat-cfnapi-6f6c4bddd6-xqtdm" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.861112 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3fa8d357-cef3-43d1-8338-386d9880bb82-config-data-custom\") pod \"heat-api-557f889856-kwzsw\" (UID: \"3fa8d357-cef3-43d1-8338-386d9880bb82\") " pod="openstack/heat-api-557f889856-kwzsw" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.861265 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da0e4cf4-a01f-48df-b61b-796c8bc9f60a-config-data\") pod \"heat-cfnapi-6f6c4bddd6-xqtdm\" (UID: \"da0e4cf4-a01f-48df-b61b-796c8bc9f60a\") " pod="openstack/heat-cfnapi-6f6c4bddd6-xqtdm" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.869524 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0e4cf4-a01f-48df-b61b-796c8bc9f60a-combined-ca-bundle\") pod \"heat-cfnapi-6f6c4bddd6-xqtdm\" (UID: \"da0e4cf4-a01f-48df-b61b-796c8bc9f60a\") " pod="openstack/heat-cfnapi-6f6c4bddd6-xqtdm" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.873465 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhn24\" (UniqueName: \"kubernetes.io/projected/3fa8d357-cef3-43d1-8338-386d9880bb82-kube-api-access-xhn24\") pod \"heat-api-557f889856-kwzsw\" (UID: \"3fa8d357-cef3-43d1-8338-386d9880bb82\") " pod="openstack/heat-api-557f889856-kwzsw" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.890541 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khr6q\" (UniqueName: \"kubernetes.io/projected/da0e4cf4-a01f-48df-b61b-796c8bc9f60a-kube-api-access-khr6q\") pod \"heat-cfnapi-6f6c4bddd6-xqtdm\" (UID: \"da0e4cf4-a01f-48df-b61b-796c8bc9f60a\") " pod="openstack/heat-cfnapi-6f6c4bddd6-xqtdm" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.891513 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6k7c\" (UniqueName: \"kubernetes.io/projected/da76d93d-7c2d-485e-b5e0-229f4254d74b-kube-api-access-m6k7c\") pod \"dnsmasq-dns-7756b9d78c-btn45\" (UID: \"da76d93d-7c2d-485e-b5e0-229f4254d74b\") " pod="openstack/dnsmasq-dns-7756b9d78c-btn45" Jan 29 17:07:58 crc kubenswrapper[4886]: I0129 17:07:58.996009 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-btn45" Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.032410 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6f6c4bddd6-xqtdm" Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.100429 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-557f889856-kwzsw" Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.510496 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-lbcqc" Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.583599 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/77e77908-f078-4711-8c40-5e0bbda2a830-dns-svc\") pod \"77e77908-f078-4711-8c40-5e0bbda2a830\" (UID: \"77e77908-f078-4711-8c40-5e0bbda2a830\") " Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.583752 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rwhj\" (UniqueName: \"kubernetes.io/projected/77e77908-f078-4711-8c40-5e0bbda2a830-kube-api-access-6rwhj\") pod \"77e77908-f078-4711-8c40-5e0bbda2a830\" (UID: \"77e77908-f078-4711-8c40-5e0bbda2a830\") " Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.583810 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/77e77908-f078-4711-8c40-5e0bbda2a830-ovsdbserver-nb\") pod \"77e77908-f078-4711-8c40-5e0bbda2a830\" (UID: \"77e77908-f078-4711-8c40-5e0bbda2a830\") " Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.583889 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/77e77908-f078-4711-8c40-5e0bbda2a830-ovsdbserver-sb\") pod \"77e77908-f078-4711-8c40-5e0bbda2a830\" (UID: \"77e77908-f078-4711-8c40-5e0bbda2a830\") " Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.583944 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/77e77908-f078-4711-8c40-5e0bbda2a830-dns-swift-storage-0\") pod \"77e77908-f078-4711-8c40-5e0bbda2a830\" (UID: \"77e77908-f078-4711-8c40-5e0bbda2a830\") " Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.584053 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77e77908-f078-4711-8c40-5e0bbda2a830-config\") pod \"77e77908-f078-4711-8c40-5e0bbda2a830\" (UID: \"77e77908-f078-4711-8c40-5e0bbda2a830\") " Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.600639 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77e77908-f078-4711-8c40-5e0bbda2a830-kube-api-access-6rwhj" (OuterVolumeSpecName: "kube-api-access-6rwhj") pod "77e77908-f078-4711-8c40-5e0bbda2a830" (UID: "77e77908-f078-4711-8c40-5e0bbda2a830"). InnerVolumeSpecName "kube-api-access-6rwhj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.604763 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rwhj\" (UniqueName: \"kubernetes.io/projected/77e77908-f078-4711-8c40-5e0bbda2a830-kube-api-access-6rwhj\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.663518 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.663829 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.721093 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77e77908-f078-4711-8c40-5e0bbda2a830-config" (OuterVolumeSpecName: "config") pod "77e77908-f078-4711-8c40-5e0bbda2a830" (UID: "77e77908-f078-4711-8c40-5e0bbda2a830"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.733950 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77e77908-f078-4711-8c40-5e0bbda2a830-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "77e77908-f078-4711-8c40-5e0bbda2a830" (UID: "77e77908-f078-4711-8c40-5e0bbda2a830"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.750511 4886 generic.go:334] "Generic (PLEG): container finished" podID="79744cfd-ecdc-42c4-b70e-bb957640a11c" containerID="3d38ab3f39b8f10e80b68dcbf56b94dd2483224e667fea1a1a75ada7c0ecf901" exitCode=0 Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.750548 4886 generic.go:334] "Generic (PLEG): container finished" podID="79744cfd-ecdc-42c4-b70e-bb957640a11c" containerID="dd01b92d286ab63ee03bff172b9b03aa69d2a7db780bc4a7761f9cf8e7790134" exitCode=0 Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.750606 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"79744cfd-ecdc-42c4-b70e-bb957640a11c","Type":"ContainerDied","Data":"3d38ab3f39b8f10e80b68dcbf56b94dd2483224e667fea1a1a75ada7c0ecf901"} Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.750641 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"79744cfd-ecdc-42c4-b70e-bb957640a11c","Type":"ContainerDied","Data":"dd01b92d286ab63ee03bff172b9b03aa69d2a7db780bc4a7761f9cf8e7790134"} Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.760338 4886 generic.go:334] "Generic (PLEG): container finished" podID="77e77908-f078-4711-8c40-5e0bbda2a830" containerID="53ca240c0a66f67f4b44ce143c7902f3cc1ddf7f2d59ac9c55d73990e13de5e8" exitCode=0 Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.760397 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lbcqc" event={"ID":"77e77908-f078-4711-8c40-5e0bbda2a830","Type":"ContainerDied","Data":"53ca240c0a66f67f4b44ce143c7902f3cc1ddf7f2d59ac9c55d73990e13de5e8"} Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.760428 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lbcqc" event={"ID":"77e77908-f078-4711-8c40-5e0bbda2a830","Type":"ContainerDied","Data":"00c8741e78cdef06ac95516aebc006fef061abb10bc976627d894974f2fc0223"} Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.760449 4886 scope.go:117] "RemoveContainer" containerID="53ca240c0a66f67f4b44ce143c7902f3cc1ddf7f2d59ac9c55d73990e13de5e8" Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.760623 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-lbcqc" Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.762265 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77e77908-f078-4711-8c40-5e0bbda2a830-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "77e77908-f078-4711-8c40-5e0bbda2a830" (UID: "77e77908-f078-4711-8c40-5e0bbda2a830"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.763959 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77e77908-f078-4711-8c40-5e0bbda2a830-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "77e77908-f078-4711-8c40-5e0bbda2a830" (UID: "77e77908-f078-4711-8c40-5e0bbda2a830"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.772589 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77e77908-f078-4711-8c40-5e0bbda2a830-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "77e77908-f078-4711-8c40-5e0bbda2a830" (UID: "77e77908-f078-4711-8c40-5e0bbda2a830"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.809141 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77e77908-f078-4711-8c40-5e0bbda2a830-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.809168 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/77e77908-f078-4711-8c40-5e0bbda2a830-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.809178 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/77e77908-f078-4711-8c40-5e0bbda2a830-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.809187 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/77e77908-f078-4711-8c40-5e0bbda2a830-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.809195 4886 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/77e77908-f078-4711-8c40-5e0bbda2a830-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.832798 4886 scope.go:117] "RemoveContainer" containerID="c105784d4cb4a65b24766afa5c392562f921a5e8ba938bcdad19639f8052e82a" Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.884804 4886 scope.go:117] "RemoveContainer" containerID="53ca240c0a66f67f4b44ce143c7902f3cc1ddf7f2d59ac9c55d73990e13de5e8" Jan 29 17:07:59 crc kubenswrapper[4886]: E0129 17:07:59.885257 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53ca240c0a66f67f4b44ce143c7902f3cc1ddf7f2d59ac9c55d73990e13de5e8\": container with ID starting with 53ca240c0a66f67f4b44ce143c7902f3cc1ddf7f2d59ac9c55d73990e13de5e8 not found: ID does not exist" containerID="53ca240c0a66f67f4b44ce143c7902f3cc1ddf7f2d59ac9c55d73990e13de5e8" Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.885289 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53ca240c0a66f67f4b44ce143c7902f3cc1ddf7f2d59ac9c55d73990e13de5e8"} err="failed to get container status \"53ca240c0a66f67f4b44ce143c7902f3cc1ddf7f2d59ac9c55d73990e13de5e8\": rpc error: code = NotFound desc = could not find container \"53ca240c0a66f67f4b44ce143c7902f3cc1ddf7f2d59ac9c55d73990e13de5e8\": container with ID starting with 53ca240c0a66f67f4b44ce143c7902f3cc1ddf7f2d59ac9c55d73990e13de5e8 not found: ID does not exist" Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.885309 4886 scope.go:117] "RemoveContainer" containerID="c105784d4cb4a65b24766afa5c392562f921a5e8ba938bcdad19639f8052e82a" Jan 29 17:07:59 crc kubenswrapper[4886]: E0129 17:07:59.888862 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= could not find container \"c105784d4cb4a65b24766afa5c392562f921a5e8ba938bcdad19639f8052e82a\": container with ID starting with c105784d4cb4a65b24766afa5c392562f921a5e8ba938bcdad19639f8052e82a not found: ID does not exist" containerID="c105784d4cb4a65b24766afa5c392562f921a5e8ba938bcdad19639f8052e82a" Jan 29 17:07:59 crc kubenswrapper[4886]: I0129 17:07:59.888888 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c105784d4cb4a65b24766afa5c392562f921a5e8ba938bcdad19639f8052e82a"} err="failed to get container status \"c105784d4cb4a65b24766afa5c392562f921a5e8ba938bcdad19639f8052e82a\": rpc error: code = NotFound desc = could not find container \"c105784d4cb4a65b24766afa5c392562f921a5e8ba938bcdad19639f8052e82a\": container with ID starting with c105784d4cb4a65b24766afa5c392562f921a5e8ba938bcdad19639f8052e82a not found: ID does not exist" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.054623 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.065895 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-54f8bbfbf-9qjxm"] Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.209140 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lbcqc"] Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.224121 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79744cfd-ecdc-42c4-b70e-bb957640a11c-etc-machine-id\") pod \"79744cfd-ecdc-42c4-b70e-bb957640a11c\" (UID: \"79744cfd-ecdc-42c4-b70e-bb957640a11c\") " Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.224272 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79744cfd-ecdc-42c4-b70e-bb957640a11c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "79744cfd-ecdc-42c4-b70e-bb957640a11c" (UID: "79744cfd-ecdc-42c4-b70e-bb957640a11c"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.224385 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79744cfd-ecdc-42c4-b70e-bb957640a11c-config-data-custom\") pod \"79744cfd-ecdc-42c4-b70e-bb957640a11c\" (UID: \"79744cfd-ecdc-42c4-b70e-bb957640a11c\") " Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.224440 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79744cfd-ecdc-42c4-b70e-bb957640a11c-scripts\") pod \"79744cfd-ecdc-42c4-b70e-bb957640a11c\" (UID: \"79744cfd-ecdc-42c4-b70e-bb957640a11c\") " Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.225266 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79744cfd-ecdc-42c4-b70e-bb957640a11c-combined-ca-bundle\") pod \"79744cfd-ecdc-42c4-b70e-bb957640a11c\" (UID: \"79744cfd-ecdc-42c4-b70e-bb957640a11c\") " Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.225316 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79744cfd-ecdc-42c4-b70e-bb957640a11c-config-data\") pod \"79744cfd-ecdc-42c4-b70e-bb957640a11c\" (UID: \"79744cfd-ecdc-42c4-b70e-bb957640a11c\") " Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.225408 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrzlm\" (UniqueName: \"kubernetes.io/projected/79744cfd-ecdc-42c4-b70e-bb957640a11c-kube-api-access-zrzlm\") pod \"79744cfd-ecdc-42c4-b70e-bb957640a11c\" (UID: \"79744cfd-ecdc-42c4-b70e-bb957640a11c\") " Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.226139 4886 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79744cfd-ecdc-42c4-b70e-bb957640a11c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.236605 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79744cfd-ecdc-42c4-b70e-bb957640a11c-kube-api-access-zrzlm" (OuterVolumeSpecName: "kube-api-access-zrzlm") pod "79744cfd-ecdc-42c4-b70e-bb957640a11c" (UID: "79744cfd-ecdc-42c4-b70e-bb957640a11c"). InnerVolumeSpecName "kube-api-access-zrzlm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.236720 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79744cfd-ecdc-42c4-b70e-bb957640a11c-scripts" (OuterVolumeSpecName: "scripts") pod "79744cfd-ecdc-42c4-b70e-bb957640a11c" (UID: "79744cfd-ecdc-42c4-b70e-bb957640a11c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.239962 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79744cfd-ecdc-42c4-b70e-bb957640a11c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "79744cfd-ecdc-42c4-b70e-bb957640a11c" (UID: "79744cfd-ecdc-42c4-b70e-bb957640a11c"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.257020 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lbcqc"] Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.327870 4886 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79744cfd-ecdc-42c4-b70e-bb957640a11c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.327901 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79744cfd-ecdc-42c4-b70e-bb957640a11c-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.327911 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrzlm\" (UniqueName: \"kubernetes.io/projected/79744cfd-ecdc-42c4-b70e-bb957640a11c-kube-api-access-zrzlm\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.342556 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79744cfd-ecdc-42c4-b70e-bb957640a11c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79744cfd-ecdc-42c4-b70e-bb957640a11c" (UID: "79744cfd-ecdc-42c4-b70e-bb957640a11c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.429944 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79744cfd-ecdc-42c4-b70e-bb957640a11c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.492469 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79744cfd-ecdc-42c4-b70e-bb957640a11c-config-data" (OuterVolumeSpecName: "config-data") pod "79744cfd-ecdc-42c4-b70e-bb957640a11c" (UID: "79744cfd-ecdc-42c4-b70e-bb957640a11c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.531735 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79744cfd-ecdc-42c4-b70e-bb957640a11c-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.649910 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77e77908-f078-4711-8c40-5e0bbda2a830" path="/var/lib/kubelet/pods/77e77908-f078-4711-8c40-5e0bbda2a830/volumes" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.675307 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-557f889856-kwzsw"] Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.705502 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6f6c4bddd6-xqtdm"] Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.740409 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-btn45"] Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.783736 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.784396 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="24e9fd03-4a7f-45c7-83e6-608ad7648766" containerName="proxy-httpd" containerID="cri-o://44a3542db94b31c96db714bd6c3559bd3e1d7d7a66d633f86abe33fb9a6f4bd0" gracePeriod=30 Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.784771 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="24e9fd03-4a7f-45c7-83e6-608ad7648766" containerName="sg-core" containerID="cri-o://9d8e62602d1305f37f8a51b73f2c104ca86a67a3331fc3d826d42ccf0fac24ce" gracePeriod=30 Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.784821 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="24e9fd03-4a7f-45c7-83e6-608ad7648766" containerName="ceilometer-notification-agent" containerID="cri-o://1bdf46565ca1048aaf33d2e55676cc44132df701332d9cac871024cf7e0601b1" gracePeriod=30 Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.784741 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="24e9fd03-4a7f-45c7-83e6-608ad7648766" containerName="ceilometer-central-agent" containerID="cri-o://472df94bcf2c9160f704fb8f0e7681c07c27ea44d994460b0bfef6434e9a5bfa" gracePeriod=30 Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.812694 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.812717 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"79744cfd-ecdc-42c4-b70e-bb957640a11c","Type":"ContainerDied","Data":"eb5bacab0ef6b5257f3ba5127165c9496314e35a73af62c8e260a0b9866372e0"} Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.812809 4886 scope.go:117] "RemoveContainer" containerID="3d38ab3f39b8f10e80b68dcbf56b94dd2483224e667fea1a1a75ada7c0ecf901" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.821680 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f6c4bddd6-xqtdm" event={"ID":"da0e4cf4-a01f-48df-b61b-796c8bc9f60a","Type":"ContainerStarted","Data":"349855b0bf0483b72492372d5c1a6d697a135a4af893483f84d1a5f6df2c5a62"} Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.824359 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-btn45" event={"ID":"da76d93d-7c2d-485e-b5e0-229f4254d74b","Type":"ContainerStarted","Data":"bfc495e69c05d32911e1c19e2fff095c3d4fca06c566554a8f30f63272e3f284"} Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.826944 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-54f8bbfbf-9qjxm" event={"ID":"92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f","Type":"ContainerStarted","Data":"b974dc7a13dfe4723bbe5629a3fd12f5dbc56e7cab5fd25c13a1d891ca45ce3f"} Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.826972 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-54f8bbfbf-9qjxm" event={"ID":"92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f","Type":"ContainerStarted","Data":"0f319e6982b89bee08a0388a5eb4c63bb973328dc67504ccea174e9928171156"} Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.828202 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-54f8bbfbf-9qjxm" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.840598 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-557f889856-kwzsw" event={"ID":"3fa8d357-cef3-43d1-8338-386d9880bb82","Type":"ContainerStarted","Data":"8e93f8d9b007e6405d2291aa2ff9660432275194b991846ebc2d8ccfab880ce5"} Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.871896 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.888619 4886 scope.go:117] "RemoveContainer" containerID="dd01b92d286ab63ee03bff172b9b03aa69d2a7db780bc4a7761f9cf8e7790134" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.918276 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 17:08:00 crc kubenswrapper[4886]: E0129 17:08:00.928781 4886 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79744cfd_ecdc_42c4_b70e_bb957640a11c.slice/crio-eb5bacab0ef6b5257f3ba5127165c9496314e35a73af62c8e260a0b9866372e0\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79744cfd_ecdc_42c4_b70e_bb957640a11c.slice\": RecentStats: unable to find data in memory cache]" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.935289 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 17:08:00 crc kubenswrapper[4886]: E0129 
17:08:00.935818 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77e77908-f078-4711-8c40-5e0bbda2a830" containerName="dnsmasq-dns" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.935842 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="77e77908-f078-4711-8c40-5e0bbda2a830" containerName="dnsmasq-dns" Jan 29 17:08:00 crc kubenswrapper[4886]: E0129 17:08:00.935874 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79744cfd-ecdc-42c4-b70e-bb957640a11c" containerName="cinder-scheduler" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.935883 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="79744cfd-ecdc-42c4-b70e-bb957640a11c" containerName="cinder-scheduler" Jan 29 17:08:00 crc kubenswrapper[4886]: E0129 17:08:00.935899 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79744cfd-ecdc-42c4-b70e-bb957640a11c" containerName="probe" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.935907 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="79744cfd-ecdc-42c4-b70e-bb957640a11c" containerName="probe" Jan 29 17:08:00 crc kubenswrapper[4886]: E0129 17:08:00.935923 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77e77908-f078-4711-8c40-5e0bbda2a830" containerName="init" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.935929 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="77e77908-f078-4711-8c40-5e0bbda2a830" containerName="init" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.936129 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="79744cfd-ecdc-42c4-b70e-bb957640a11c" containerName="probe" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.936149 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="77e77908-f078-4711-8c40-5e0bbda2a830" containerName="dnsmasq-dns" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.936159 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="79744cfd-ecdc-42c4-b70e-bb957640a11c" containerName="cinder-scheduler" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.937558 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.953406 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-54f8bbfbf-9qjxm" podStartSLOduration=2.953389363 podStartE2EDuration="2.953389363s" podCreationTimestamp="2026-01-29 17:07:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:08:00.86809209 +0000 UTC m=+2763.776811352" watchObservedRunningTime="2026-01-29 17:08:00.953389363 +0000 UTC m=+2763.862108635" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.962722 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 29 17:08:00 crc kubenswrapper[4886]: I0129 17:08:00.968422 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.057160 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9b55479-5ea1-4a5b-9e34-e83313b04dec-config-data\") pod \"cinder-scheduler-0\" (UID: \"d9b55479-5ea1-4a5b-9e34-e83313b04dec\") " pod="openstack/cinder-scheduler-0" Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.057525 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d9b55479-5ea1-4a5b-9e34-e83313b04dec-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d9b55479-5ea1-4a5b-9e34-e83313b04dec\") " pod="openstack/cinder-scheduler-0" Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.057663 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b55479-5ea1-4a5b-9e34-e83313b04dec-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d9b55479-5ea1-4a5b-9e34-e83313b04dec\") " pod="openstack/cinder-scheduler-0" Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.057735 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d9b55479-5ea1-4a5b-9e34-e83313b04dec-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d9b55479-5ea1-4a5b-9e34-e83313b04dec\") " pod="openstack/cinder-scheduler-0" Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.057813 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9b55479-5ea1-4a5b-9e34-e83313b04dec-scripts\") pod \"cinder-scheduler-0\" (UID: \"d9b55479-5ea1-4a5b-9e34-e83313b04dec\") " pod="openstack/cinder-scheduler-0" Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.058573 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2z69\" (UniqueName: \"kubernetes.io/projected/d9b55479-5ea1-4a5b-9e34-e83313b04dec-kube-api-access-s2z69\") pod \"cinder-scheduler-0\" (UID: \"d9b55479-5ea1-4a5b-9e34-e83313b04dec\") " pod="openstack/cinder-scheduler-0" Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.164133 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d9b55479-5ea1-4a5b-9e34-e83313b04dec-config-data-custom\") pod 
\"cinder-scheduler-0\" (UID: \"d9b55479-5ea1-4a5b-9e34-e83313b04dec\") " pod="openstack/cinder-scheduler-0" Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.164215 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b55479-5ea1-4a5b-9e34-e83313b04dec-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d9b55479-5ea1-4a5b-9e34-e83313b04dec\") " pod="openstack/cinder-scheduler-0" Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.164247 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d9b55479-5ea1-4a5b-9e34-e83313b04dec-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d9b55479-5ea1-4a5b-9e34-e83313b04dec\") " pod="openstack/cinder-scheduler-0" Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.164275 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9b55479-5ea1-4a5b-9e34-e83313b04dec-scripts\") pod \"cinder-scheduler-0\" (UID: \"d9b55479-5ea1-4a5b-9e34-e83313b04dec\") " pod="openstack/cinder-scheduler-0" Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.164305 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2z69\" (UniqueName: \"kubernetes.io/projected/d9b55479-5ea1-4a5b-9e34-e83313b04dec-kube-api-access-s2z69\") pod \"cinder-scheduler-0\" (UID: \"d9b55479-5ea1-4a5b-9e34-e83313b04dec\") " pod="openstack/cinder-scheduler-0" Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.164450 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9b55479-5ea1-4a5b-9e34-e83313b04dec-config-data\") pod \"cinder-scheduler-0\" (UID: \"d9b55479-5ea1-4a5b-9e34-e83313b04dec\") " pod="openstack/cinder-scheduler-0" Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.165248 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d9b55479-5ea1-4a5b-9e34-e83313b04dec-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d9b55479-5ea1-4a5b-9e34-e83313b04dec\") " pod="openstack/cinder-scheduler-0" Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.171088 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b55479-5ea1-4a5b-9e34-e83313b04dec-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d9b55479-5ea1-4a5b-9e34-e83313b04dec\") " pod="openstack/cinder-scheduler-0" Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.172631 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9b55479-5ea1-4a5b-9e34-e83313b04dec-config-data\") pod \"cinder-scheduler-0\" (UID: \"d9b55479-5ea1-4a5b-9e34-e83313b04dec\") " pod="openstack/cinder-scheduler-0" Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.173842 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d9b55479-5ea1-4a5b-9e34-e83313b04dec-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d9b55479-5ea1-4a5b-9e34-e83313b04dec\") " pod="openstack/cinder-scheduler-0" Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.183848 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2z69\" 
(UniqueName: \"kubernetes.io/projected/d9b55479-5ea1-4a5b-9e34-e83313b04dec-kube-api-access-s2z69\") pod \"cinder-scheduler-0\" (UID: \"d9b55479-5ea1-4a5b-9e34-e83313b04dec\") " pod="openstack/cinder-scheduler-0" Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.193704 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9b55479-5ea1-4a5b-9e34-e83313b04dec-scripts\") pod \"cinder-scheduler-0\" (UID: \"d9b55479-5ea1-4a5b-9e34-e83313b04dec\") " pod="openstack/cinder-scheduler-0" Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.324514 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.810468 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.886618 4886 generic.go:334] "Generic (PLEG): container finished" podID="24e9fd03-4a7f-45c7-83e6-608ad7648766" containerID="44a3542db94b31c96db714bd6c3559bd3e1d7d7a66d633f86abe33fb9a6f4bd0" exitCode=0 Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.886651 4886 generic.go:334] "Generic (PLEG): container finished" podID="24e9fd03-4a7f-45c7-83e6-608ad7648766" containerID="9d8e62602d1305f37f8a51b73f2c104ca86a67a3331fc3d826d42ccf0fac24ce" exitCode=2 Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.886660 4886 generic.go:334] "Generic (PLEG): container finished" podID="24e9fd03-4a7f-45c7-83e6-608ad7648766" containerID="1bdf46565ca1048aaf33d2e55676cc44132df701332d9cac871024cf7e0601b1" exitCode=0 Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.886667 4886 generic.go:334] "Generic (PLEG): container finished" podID="24e9fd03-4a7f-45c7-83e6-608ad7648766" containerID="472df94bcf2c9160f704fb8f0e7681c07c27ea44d994460b0bfef6434e9a5bfa" exitCode=0 Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.886700 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24e9fd03-4a7f-45c7-83e6-608ad7648766","Type":"ContainerDied","Data":"44a3542db94b31c96db714bd6c3559bd3e1d7d7a66d633f86abe33fb9a6f4bd0"} Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.886746 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24e9fd03-4a7f-45c7-83e6-608ad7648766","Type":"ContainerDied","Data":"9d8e62602d1305f37f8a51b73f2c104ca86a67a3331fc3d826d42ccf0fac24ce"} Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.886758 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24e9fd03-4a7f-45c7-83e6-608ad7648766","Type":"ContainerDied","Data":"1bdf46565ca1048aaf33d2e55676cc44132df701332d9cac871024cf7e0601b1"} Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.886768 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24e9fd03-4a7f-45c7-83e6-608ad7648766","Type":"ContainerDied","Data":"472df94bcf2c9160f704fb8f0e7681c07c27ea44d994460b0bfef6434e9a5bfa"} Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.893271 4886 generic.go:334] "Generic (PLEG): container finished" podID="da76d93d-7c2d-485e-b5e0-229f4254d74b" containerID="aecb755c349be6f445700545d32b2d2a1cceeb8e44ce0b32e7f93655d8a60679" exitCode=0 Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.893389 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-btn45" 
event={"ID":"da76d93d-7c2d-485e-b5e0-229f4254d74b","Type":"ContainerDied","Data":"aecb755c349be6f445700545d32b2d2a1cceeb8e44ce0b32e7f93655d8a60679"} Jan 29 17:08:01 crc kubenswrapper[4886]: I0129 17:08:01.910766 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d9b55479-5ea1-4a5b-9e34-e83313b04dec","Type":"ContainerStarted","Data":"90b34c7a69776956d3b5a18587107f777be1a70596c9cd0c0def826fbc244baa"} Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.161686 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-f458794ff-v7p92"] Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.179313 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-f458794ff-v7p92"] Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.179442 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-f458794ff-v7p92" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.190995 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.191557 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.191914 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.227062 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc7kr\" (UniqueName: \"kubernetes.io/projected/79c81ef9-65c7-4372-9a47-8ed93521eadf-kube-api-access-sc7kr\") pod \"swift-proxy-f458794ff-v7p92\" (UID: \"79c81ef9-65c7-4372-9a47-8ed93521eadf\") " pod="openstack/swift-proxy-f458794ff-v7p92" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.227108 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/79c81ef9-65c7-4372-9a47-8ed93521eadf-etc-swift\") pod \"swift-proxy-f458794ff-v7p92\" (UID: \"79c81ef9-65c7-4372-9a47-8ed93521eadf\") " pod="openstack/swift-proxy-f458794ff-v7p92" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.227196 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79c81ef9-65c7-4372-9a47-8ed93521eadf-combined-ca-bundle\") pod \"swift-proxy-f458794ff-v7p92\" (UID: \"79c81ef9-65c7-4372-9a47-8ed93521eadf\") " pod="openstack/swift-proxy-f458794ff-v7p92" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.227223 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79c81ef9-65c7-4372-9a47-8ed93521eadf-log-httpd\") pod \"swift-proxy-f458794ff-v7p92\" (UID: \"79c81ef9-65c7-4372-9a47-8ed93521eadf\") " pod="openstack/swift-proxy-f458794ff-v7p92" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.227252 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79c81ef9-65c7-4372-9a47-8ed93521eadf-config-data\") pod \"swift-proxy-f458794ff-v7p92\" (UID: \"79c81ef9-65c7-4372-9a47-8ed93521eadf\") " pod="openstack/swift-proxy-f458794ff-v7p92" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.227313 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79c81ef9-65c7-4372-9a47-8ed93521eadf-internal-tls-certs\") pod \"swift-proxy-f458794ff-v7p92\" (UID: \"79c81ef9-65c7-4372-9a47-8ed93521eadf\") " pod="openstack/swift-proxy-f458794ff-v7p92" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.227355 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79c81ef9-65c7-4372-9a47-8ed93521eadf-run-httpd\") pod \"swift-proxy-f458794ff-v7p92\" (UID: \"79c81ef9-65c7-4372-9a47-8ed93521eadf\") " pod="openstack/swift-proxy-f458794ff-v7p92" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.227371 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79c81ef9-65c7-4372-9a47-8ed93521eadf-public-tls-certs\") pod \"swift-proxy-f458794ff-v7p92\" (UID: \"79c81ef9-65c7-4372-9a47-8ed93521eadf\") " pod="openstack/swift-proxy-f458794ff-v7p92" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.332268 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc7kr\" (UniqueName: \"kubernetes.io/projected/79c81ef9-65c7-4372-9a47-8ed93521eadf-kube-api-access-sc7kr\") pod \"swift-proxy-f458794ff-v7p92\" (UID: \"79c81ef9-65c7-4372-9a47-8ed93521eadf\") " pod="openstack/swift-proxy-f458794ff-v7p92" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.332610 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/79c81ef9-65c7-4372-9a47-8ed93521eadf-etc-swift\") pod \"swift-proxy-f458794ff-v7p92\" (UID: \"79c81ef9-65c7-4372-9a47-8ed93521eadf\") " pod="openstack/swift-proxy-f458794ff-v7p92" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.338996 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79c81ef9-65c7-4372-9a47-8ed93521eadf-combined-ca-bundle\") pod \"swift-proxy-f458794ff-v7p92\" (UID: \"79c81ef9-65c7-4372-9a47-8ed93521eadf\") " pod="openstack/swift-proxy-f458794ff-v7p92" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.339081 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79c81ef9-65c7-4372-9a47-8ed93521eadf-log-httpd\") pod \"swift-proxy-f458794ff-v7p92\" (UID: \"79c81ef9-65c7-4372-9a47-8ed93521eadf\") " pod="openstack/swift-proxy-f458794ff-v7p92" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.339146 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79c81ef9-65c7-4372-9a47-8ed93521eadf-config-data\") pod \"swift-proxy-f458794ff-v7p92\" (UID: \"79c81ef9-65c7-4372-9a47-8ed93521eadf\") " pod="openstack/swift-proxy-f458794ff-v7p92" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.339436 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79c81ef9-65c7-4372-9a47-8ed93521eadf-internal-tls-certs\") pod \"swift-proxy-f458794ff-v7p92\" (UID: \"79c81ef9-65c7-4372-9a47-8ed93521eadf\") " pod="openstack/swift-proxy-f458794ff-v7p92" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.339510 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79c81ef9-65c7-4372-9a47-8ed93521eadf-run-httpd\") pod \"swift-proxy-f458794ff-v7p92\" (UID: \"79c81ef9-65c7-4372-9a47-8ed93521eadf\") " pod="openstack/swift-proxy-f458794ff-v7p92" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.339546 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79c81ef9-65c7-4372-9a47-8ed93521eadf-public-tls-certs\") pod \"swift-proxy-f458794ff-v7p92\" (UID: \"79c81ef9-65c7-4372-9a47-8ed93521eadf\") " pod="openstack/swift-proxy-f458794ff-v7p92" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.339895 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79c81ef9-65c7-4372-9a47-8ed93521eadf-log-httpd\") pod \"swift-proxy-f458794ff-v7p92\" (UID: \"79c81ef9-65c7-4372-9a47-8ed93521eadf\") " pod="openstack/swift-proxy-f458794ff-v7p92" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.344093 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79c81ef9-65c7-4372-9a47-8ed93521eadf-combined-ca-bundle\") pod \"swift-proxy-f458794ff-v7p92\" (UID: \"79c81ef9-65c7-4372-9a47-8ed93521eadf\") " pod="openstack/swift-proxy-f458794ff-v7p92" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.344315 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79c81ef9-65c7-4372-9a47-8ed93521eadf-run-httpd\") pod \"swift-proxy-f458794ff-v7p92\" (UID: \"79c81ef9-65c7-4372-9a47-8ed93521eadf\") " pod="openstack/swift-proxy-f458794ff-v7p92" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.352277 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/79c81ef9-65c7-4372-9a47-8ed93521eadf-etc-swift\") pod \"swift-proxy-f458794ff-v7p92\" (UID: \"79c81ef9-65c7-4372-9a47-8ed93521eadf\") " pod="openstack/swift-proxy-f458794ff-v7p92" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.354464 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79c81ef9-65c7-4372-9a47-8ed93521eadf-public-tls-certs\") pod \"swift-proxy-f458794ff-v7p92\" (UID: \"79c81ef9-65c7-4372-9a47-8ed93521eadf\") " pod="openstack/swift-proxy-f458794ff-v7p92" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.354940 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79c81ef9-65c7-4372-9a47-8ed93521eadf-config-data\") pod \"swift-proxy-f458794ff-v7p92\" (UID: \"79c81ef9-65c7-4372-9a47-8ed93521eadf\") " pod="openstack/swift-proxy-f458794ff-v7p92" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.356929 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc7kr\" (UniqueName: \"kubernetes.io/projected/79c81ef9-65c7-4372-9a47-8ed93521eadf-kube-api-access-sc7kr\") pod \"swift-proxy-f458794ff-v7p92\" (UID: \"79c81ef9-65c7-4372-9a47-8ed93521eadf\") " pod="openstack/swift-proxy-f458794ff-v7p92" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.359303 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/79c81ef9-65c7-4372-9a47-8ed93521eadf-internal-tls-certs\") pod \"swift-proxy-f458794ff-v7p92\" (UID: \"79c81ef9-65c7-4372-9a47-8ed93521eadf\") " pod="openstack/swift-proxy-f458794ff-v7p92" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.368451 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.449763 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kkf6\" (UniqueName: \"kubernetes.io/projected/24e9fd03-4a7f-45c7-83e6-608ad7648766-kube-api-access-5kkf6\") pod \"24e9fd03-4a7f-45c7-83e6-608ad7648766\" (UID: \"24e9fd03-4a7f-45c7-83e6-608ad7648766\") " Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.449833 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24e9fd03-4a7f-45c7-83e6-608ad7648766-run-httpd\") pod \"24e9fd03-4a7f-45c7-83e6-608ad7648766\" (UID: \"24e9fd03-4a7f-45c7-83e6-608ad7648766\") " Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.450122 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24e9fd03-4a7f-45c7-83e6-608ad7648766-log-httpd\") pod \"24e9fd03-4a7f-45c7-83e6-608ad7648766\" (UID: \"24e9fd03-4a7f-45c7-83e6-608ad7648766\") " Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.450300 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/24e9fd03-4a7f-45c7-83e6-608ad7648766-sg-core-conf-yaml\") pod \"24e9fd03-4a7f-45c7-83e6-608ad7648766\" (UID: \"24e9fd03-4a7f-45c7-83e6-608ad7648766\") " Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.450384 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24e9fd03-4a7f-45c7-83e6-608ad7648766-combined-ca-bundle\") pod \"24e9fd03-4a7f-45c7-83e6-608ad7648766\" (UID: \"24e9fd03-4a7f-45c7-83e6-608ad7648766\") " Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.450420 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24e9fd03-4a7f-45c7-83e6-608ad7648766-config-data\") pod \"24e9fd03-4a7f-45c7-83e6-608ad7648766\" (UID: \"24e9fd03-4a7f-45c7-83e6-608ad7648766\") " Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.450437 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24e9fd03-4a7f-45c7-83e6-608ad7648766-scripts\") pod \"24e9fd03-4a7f-45c7-83e6-608ad7648766\" (UID: \"24e9fd03-4a7f-45c7-83e6-608ad7648766\") " Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.454163 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24e9fd03-4a7f-45c7-83e6-608ad7648766-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "24e9fd03-4a7f-45c7-83e6-608ad7648766" (UID: "24e9fd03-4a7f-45c7-83e6-608ad7648766"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.455627 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24e9fd03-4a7f-45c7-83e6-608ad7648766-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "24e9fd03-4a7f-45c7-83e6-608ad7648766" (UID: "24e9fd03-4a7f-45c7-83e6-608ad7648766"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.460713 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24e9fd03-4a7f-45c7-83e6-608ad7648766-kube-api-access-5kkf6" (OuterVolumeSpecName: "kube-api-access-5kkf6") pod "24e9fd03-4a7f-45c7-83e6-608ad7648766" (UID: "24e9fd03-4a7f-45c7-83e6-608ad7648766"). InnerVolumeSpecName "kube-api-access-5kkf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.484696 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24e9fd03-4a7f-45c7-83e6-608ad7648766-scripts" (OuterVolumeSpecName: "scripts") pod "24e9fd03-4a7f-45c7-83e6-608ad7648766" (UID: "24e9fd03-4a7f-45c7-83e6-608ad7648766"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.533881 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-f458794ff-v7p92" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.586477 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24e9fd03-4a7f-45c7-83e6-608ad7648766-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.586524 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kkf6\" (UniqueName: \"kubernetes.io/projected/24e9fd03-4a7f-45c7-83e6-608ad7648766-kube-api-access-5kkf6\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.586535 4886 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24e9fd03-4a7f-45c7-83e6-608ad7648766-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.586543 4886 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24e9fd03-4a7f-45c7-83e6-608ad7648766-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.658499 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24e9fd03-4a7f-45c7-83e6-608ad7648766-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "24e9fd03-4a7f-45c7-83e6-608ad7648766" (UID: "24e9fd03-4a7f-45c7-83e6-608ad7648766"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.691487 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79744cfd-ecdc-42c4-b70e-bb957640a11c" path="/var/lib/kubelet/pods/79744cfd-ecdc-42c4-b70e-bb957640a11c/volumes" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.692377 4886 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/24e9fd03-4a7f-45c7-83e6-608ad7648766-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.780610 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24e9fd03-4a7f-45c7-83e6-608ad7648766-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24e9fd03-4a7f-45c7-83e6-608ad7648766" (UID: "24e9fd03-4a7f-45c7-83e6-608ad7648766"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.793837 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24e9fd03-4a7f-45c7-83e6-608ad7648766-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.838138 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24e9fd03-4a7f-45c7-83e6-608ad7648766-config-data" (OuterVolumeSpecName: "config-data") pod "24e9fd03-4a7f-45c7-83e6-608ad7648766" (UID: "24e9fd03-4a7f-45c7-83e6-608ad7648766"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.895429 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24e9fd03-4a7f-45c7-83e6-608ad7648766-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.950784 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-btn45" event={"ID":"da76d93d-7c2d-485e-b5e0-229f4254d74b","Type":"ContainerStarted","Data":"d9ab37d44f372064ee89522913b27477d9c2a6f3f0efeec33809e585d943fe38"} Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.951151 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-btn45" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.965947 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.966793 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24e9fd03-4a7f-45c7-83e6-608ad7648766","Type":"ContainerDied","Data":"92751cfdf549c65a3a37a865694b9ce91879a5f41c663c775080337b3acc7481"} Jan 29 17:08:02 crc kubenswrapper[4886]: I0129 17:08:02.966846 4886 scope.go:117] "RemoveContainer" containerID="44a3542db94b31c96db714bd6c3559bd3e1d7d7a66d633f86abe33fb9a6f4bd0" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.010092 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-btn45" podStartSLOduration=5.01007263 podStartE2EDuration="5.01007263s" podCreationTimestamp="2026-01-29 17:07:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:08:02.979228672 +0000 UTC m=+2765.887947934" watchObservedRunningTime="2026-01-29 17:08:03.01007263 +0000 UTC m=+2765.918791902" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.011744 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.032029 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.076568 4886 scope.go:117] "RemoveContainer" containerID="9d8e62602d1305f37f8a51b73f2c104ca86a67a3331fc3d826d42ccf0fac24ce" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.094417 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:08:03 crc kubenswrapper[4886]: E0129 17:08:03.104728 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e9fd03-4a7f-45c7-83e6-608ad7648766" containerName="proxy-httpd" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.104775 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e9fd03-4a7f-45c7-83e6-608ad7648766" containerName="proxy-httpd" Jan 29 17:08:03 crc kubenswrapper[4886]: E0129 17:08:03.104832 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e9fd03-4a7f-45c7-83e6-608ad7648766" containerName="ceilometer-notification-agent" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.104841 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e9fd03-4a7f-45c7-83e6-608ad7648766" containerName="ceilometer-notification-agent" Jan 29 17:08:03 crc kubenswrapper[4886]: E0129 17:08:03.104880 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e9fd03-4a7f-45c7-83e6-608ad7648766" containerName="ceilometer-central-agent" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.104887 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e9fd03-4a7f-45c7-83e6-608ad7648766" containerName="ceilometer-central-agent" Jan 29 17:08:03 crc kubenswrapper[4886]: E0129 17:08:03.104912 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e9fd03-4a7f-45c7-83e6-608ad7648766" containerName="sg-core" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.104919 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e9fd03-4a7f-45c7-83e6-608ad7648766" containerName="sg-core" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.105576 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="24e9fd03-4a7f-45c7-83e6-608ad7648766" containerName="sg-core" Jan 29 17:08:03 crc 
kubenswrapper[4886]: I0129 17:08:03.105616 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="24e9fd03-4a7f-45c7-83e6-608ad7648766" containerName="ceilometer-central-agent" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.105643 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="24e9fd03-4a7f-45c7-83e6-608ad7648766" containerName="ceilometer-notification-agent" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.105653 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="24e9fd03-4a7f-45c7-83e6-608ad7648766" containerName="proxy-httpd" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.110319 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.113783 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.114171 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.130753 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.167231 4886 scope.go:117] "RemoveContainer" containerID="1bdf46565ca1048aaf33d2e55676cc44132df701332d9cac871024cf7e0601b1" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.202962 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt8hq\" (UniqueName: \"kubernetes.io/projected/e0ea79fe-a2e5-4861-be91-aba220b1b221-kube-api-access-rt8hq\") pod \"ceilometer-0\" (UID: \"e0ea79fe-a2e5-4861-be91-aba220b1b221\") " pod="openstack/ceilometer-0" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.203060 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0ea79fe-a2e5-4861-be91-aba220b1b221-run-httpd\") pod \"ceilometer-0\" (UID: \"e0ea79fe-a2e5-4861-be91-aba220b1b221\") " pod="openstack/ceilometer-0" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.203189 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0ea79fe-a2e5-4861-be91-aba220b1b221-scripts\") pod \"ceilometer-0\" (UID: \"e0ea79fe-a2e5-4861-be91-aba220b1b221\") " pod="openstack/ceilometer-0" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.203270 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0ea79fe-a2e5-4861-be91-aba220b1b221-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e0ea79fe-a2e5-4861-be91-aba220b1b221\") " pod="openstack/ceilometer-0" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.203378 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0ea79fe-a2e5-4861-be91-aba220b1b221-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e0ea79fe-a2e5-4861-be91-aba220b1b221\") " pod="openstack/ceilometer-0" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.203439 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/e0ea79fe-a2e5-4861-be91-aba220b1b221-log-httpd\") pod \"ceilometer-0\" (UID: \"e0ea79fe-a2e5-4861-be91-aba220b1b221\") " pod="openstack/ceilometer-0" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.203498 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0ea79fe-a2e5-4861-be91-aba220b1b221-config-data\") pod \"ceilometer-0\" (UID: \"e0ea79fe-a2e5-4861-be91-aba220b1b221\") " pod="openstack/ceilometer-0" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.262917 4886 scope.go:117] "RemoveContainer" containerID="472df94bcf2c9160f704fb8f0e7681c07c27ea44d994460b0bfef6434e9a5bfa" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.309682 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0ea79fe-a2e5-4861-be91-aba220b1b221-run-httpd\") pod \"ceilometer-0\" (UID: \"e0ea79fe-a2e5-4861-be91-aba220b1b221\") " pod="openstack/ceilometer-0" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.309762 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0ea79fe-a2e5-4861-be91-aba220b1b221-scripts\") pod \"ceilometer-0\" (UID: \"e0ea79fe-a2e5-4861-be91-aba220b1b221\") " pod="openstack/ceilometer-0" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.309805 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0ea79fe-a2e5-4861-be91-aba220b1b221-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e0ea79fe-a2e5-4861-be91-aba220b1b221\") " pod="openstack/ceilometer-0" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.309856 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0ea79fe-a2e5-4861-be91-aba220b1b221-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e0ea79fe-a2e5-4861-be91-aba220b1b221\") " pod="openstack/ceilometer-0" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.309876 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0ea79fe-a2e5-4861-be91-aba220b1b221-log-httpd\") pod \"ceilometer-0\" (UID: \"e0ea79fe-a2e5-4861-be91-aba220b1b221\") " pod="openstack/ceilometer-0" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.309897 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0ea79fe-a2e5-4861-be91-aba220b1b221-config-data\") pod \"ceilometer-0\" (UID: \"e0ea79fe-a2e5-4861-be91-aba220b1b221\") " pod="openstack/ceilometer-0" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.309977 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt8hq\" (UniqueName: \"kubernetes.io/projected/e0ea79fe-a2e5-4861-be91-aba220b1b221-kube-api-access-rt8hq\") pod \"ceilometer-0\" (UID: \"e0ea79fe-a2e5-4861-be91-aba220b1b221\") " pod="openstack/ceilometer-0" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.310777 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0ea79fe-a2e5-4861-be91-aba220b1b221-run-httpd\") pod \"ceilometer-0\" (UID: \"e0ea79fe-a2e5-4861-be91-aba220b1b221\") " pod="openstack/ceilometer-0" Jan 29 17:08:03 
crc kubenswrapper[4886]: I0129 17:08:03.316004 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0ea79fe-a2e5-4861-be91-aba220b1b221-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e0ea79fe-a2e5-4861-be91-aba220b1b221\") " pod="openstack/ceilometer-0" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.325573 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0ea79fe-a2e5-4861-be91-aba220b1b221-scripts\") pod \"ceilometer-0\" (UID: \"e0ea79fe-a2e5-4861-be91-aba220b1b221\") " pod="openstack/ceilometer-0" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.325808 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0ea79fe-a2e5-4861-be91-aba220b1b221-log-httpd\") pod \"ceilometer-0\" (UID: \"e0ea79fe-a2e5-4861-be91-aba220b1b221\") " pod="openstack/ceilometer-0" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.335128 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt8hq\" (UniqueName: \"kubernetes.io/projected/e0ea79fe-a2e5-4861-be91-aba220b1b221-kube-api-access-rt8hq\") pod \"ceilometer-0\" (UID: \"e0ea79fe-a2e5-4861-be91-aba220b1b221\") " pod="openstack/ceilometer-0" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.335964 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0ea79fe-a2e5-4861-be91-aba220b1b221-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e0ea79fe-a2e5-4861-be91-aba220b1b221\") " pod="openstack/ceilometer-0" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.357686 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0ea79fe-a2e5-4861-be91-aba220b1b221-config-data\") pod \"ceilometer-0\" (UID: \"e0ea79fe-a2e5-4861-be91-aba220b1b221\") " pod="openstack/ceilometer-0" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.359855 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-f458794ff-v7p92"] Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.496813 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:08:03 crc kubenswrapper[4886]: I0129 17:08:03.992054 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d9b55479-5ea1-4a5b-9e34-e83313b04dec","Type":"ContainerStarted","Data":"80305faab9c62eace7d4c1bdb3bb280453207a39ecf367613ce2d312e44454f2"} Jan 29 17:08:04 crc kubenswrapper[4886]: I0129 17:08:04.011337 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-f458794ff-v7p92" event={"ID":"79c81ef9-65c7-4372-9a47-8ed93521eadf","Type":"ContainerStarted","Data":"5dbd6462c80bc5cade9d736da39f17d5f27d4a0e06bee0ed49ba5fb78b9bb1e7"} Jan 29 17:08:04 crc kubenswrapper[4886]: I0129 17:08:04.286827 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:08:04 crc kubenswrapper[4886]: I0129 17:08:04.644850 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24e9fd03-4a7f-45c7-83e6-608ad7648766" path="/var/lib/kubelet/pods/24e9fd03-4a7f-45c7-83e6-608ad7648766/volumes" Jan 29 17:08:05 crc kubenswrapper[4886]: I0129 17:08:05.034304 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-f458794ff-v7p92" event={"ID":"79c81ef9-65c7-4372-9a47-8ed93521eadf","Type":"ContainerStarted","Data":"b04a5dbfb771cedc564c98fd3551b8ad5346c3b7c7de45d6fa5e9ae368e761db"} Jan 29 17:08:05 crc kubenswrapper[4886]: I0129 17:08:05.049442 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d9b55479-5ea1-4a5b-9e34-e83313b04dec","Type":"ContainerStarted","Data":"4d77970ac02df85f6db6ea041b1b14f3281f397dd1d73b477c3ccbbd864b1c13"} Jan 29 17:08:05 crc kubenswrapper[4886]: I0129 17:08:05.089457 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.089436317 podStartE2EDuration="5.089436317s" podCreationTimestamp="2026-01-29 17:08:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:08:05.075708741 +0000 UTC m=+2767.984428023" watchObservedRunningTime="2026-01-29 17:08:05.089436317 +0000 UTC m=+2767.998155579" Jan 29 17:08:06 crc kubenswrapper[4886]: I0129 17:08:06.169994 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 29 17:08:06 crc kubenswrapper[4886]: I0129 17:08:06.325546 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 29 17:08:06 crc kubenswrapper[4886]: W0129 17:08:06.576172 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0ea79fe_a2e5_4861_be91_aba220b1b221.slice/crio-928834e62ea2e840bea0af8f378a7be863b8582e831ecb530090b696cd7380b1 WatchSource:0}: Error finding container 928834e62ea2e840bea0af8f378a7be863b8582e831ecb530090b696cd7380b1: Status 404 returned error can't find the container with id 928834e62ea2e840bea0af8f378a7be863b8582e831ecb530090b696cd7380b1 Jan 29 17:08:06 crc kubenswrapper[4886]: I0129 17:08:06.767569 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5f6fd667fd-4s5hk"] Jan 29 17:08:06 crc kubenswrapper[4886]: I0129 17:08:06.769987 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5f6fd667fd-4s5hk" Jan 29 17:08:06 crc kubenswrapper[4886]: I0129 17:08:06.799027 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-54985c87ff-g5725"] Jan 29 17:08:06 crc kubenswrapper[4886]: I0129 17:08:06.801163 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-54985c87ff-g5725" Jan 29 17:08:06 crc kubenswrapper[4886]: I0129 17:08:06.828209 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5f6fd667fd-4s5hk"] Jan 29 17:08:06 crc kubenswrapper[4886]: I0129 17:08:06.845403 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wqs6\" (UniqueName: \"kubernetes.io/projected/3b8fde91-2520-41c6-bc79-1f6b186dcbf0-kube-api-access-8wqs6\") pod \"heat-engine-5f6fd667fd-4s5hk\" (UID: \"3b8fde91-2520-41c6-bc79-1f6b186dcbf0\") " pod="openstack/heat-engine-5f6fd667fd-4s5hk" Jan 29 17:08:06 crc kubenswrapper[4886]: I0129 17:08:06.845499 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8fde91-2520-41c6-bc79-1f6b186dcbf0-config-data\") pod \"heat-engine-5f6fd667fd-4s5hk\" (UID: \"3b8fde91-2520-41c6-bc79-1f6b186dcbf0\") " pod="openstack/heat-engine-5f6fd667fd-4s5hk" Jan 29 17:08:06 crc kubenswrapper[4886]: I0129 17:08:06.845582 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8fde91-2520-41c6-bc79-1f6b186dcbf0-combined-ca-bundle\") pod \"heat-engine-5f6fd667fd-4s5hk\" (UID: \"3b8fde91-2520-41c6-bc79-1f6b186dcbf0\") " pod="openstack/heat-engine-5f6fd667fd-4s5hk" Jan 29 17:08:06 crc kubenswrapper[4886]: I0129 17:08:06.845629 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b8fde91-2520-41c6-bc79-1f6b186dcbf0-config-data-custom\") pod \"heat-engine-5f6fd667fd-4s5hk\" (UID: \"3b8fde91-2520-41c6-bc79-1f6b186dcbf0\") " pod="openstack/heat-engine-5f6fd667fd-4s5hk" Jan 29 17:08:06 crc kubenswrapper[4886]: I0129 17:08:06.865385 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-54985c87ff-g5725"] Jan 29 17:08:06 crc kubenswrapper[4886]: I0129 17:08:06.885381 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6c7bddd46c-bnlxj"] Jan 29 17:08:06 crc kubenswrapper[4886]: I0129 17:08:06.887095 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6c7bddd46c-bnlxj" Jan 29 17:08:06 crc kubenswrapper[4886]: I0129 17:08:06.934903 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6c7bddd46c-bnlxj"] Jan 29 17:08:06 crc kubenswrapper[4886]: I0129 17:08:06.952496 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b6ce536-47ec-45b9-b926-28f1fa7eb80a-config-data\") pod \"heat-api-6c7bddd46c-bnlxj\" (UID: \"7b6ce536-47ec-45b9-b926-28f1fa7eb80a\") " pod="openstack/heat-api-6c7bddd46c-bnlxj" Jan 29 17:08:06 crc kubenswrapper[4886]: I0129 17:08:06.954569 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04a4a757-71c6-46ec-9019-8d2f64be8285-config-data\") pod \"heat-cfnapi-54985c87ff-g5725\" (UID: \"04a4a757-71c6-46ec-9019-8d2f64be8285\") " pod="openstack/heat-cfnapi-54985c87ff-g5725" Jan 29 17:08:06 crc kubenswrapper[4886]: I0129 17:08:06.954600 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04a4a757-71c6-46ec-9019-8d2f64be8285-config-data-custom\") pod \"heat-cfnapi-54985c87ff-g5725\" (UID: \"04a4a757-71c6-46ec-9019-8d2f64be8285\") " pod="openstack/heat-cfnapi-54985c87ff-g5725" Jan 29 17:08:06 crc kubenswrapper[4886]: I0129 17:08:06.954649 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8fde91-2520-41c6-bc79-1f6b186dcbf0-combined-ca-bundle\") pod \"heat-engine-5f6fd667fd-4s5hk\" (UID: \"3b8fde91-2520-41c6-bc79-1f6b186dcbf0\") " pod="openstack/heat-engine-5f6fd667fd-4s5hk" Jan 29 17:08:06 crc kubenswrapper[4886]: I0129 17:08:06.954731 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb8rb\" (UniqueName: \"kubernetes.io/projected/04a4a757-71c6-46ec-9019-8d2f64be8285-kube-api-access-bb8rb\") pod \"heat-cfnapi-54985c87ff-g5725\" (UID: \"04a4a757-71c6-46ec-9019-8d2f64be8285\") " pod="openstack/heat-cfnapi-54985c87ff-g5725" Jan 29 17:08:06 crc kubenswrapper[4886]: I0129 17:08:06.954757 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b8fde91-2520-41c6-bc79-1f6b186dcbf0-config-data-custom\") pod \"heat-engine-5f6fd667fd-4s5hk\" (UID: \"3b8fde91-2520-41c6-bc79-1f6b186dcbf0\") " pod="openstack/heat-engine-5f6fd667fd-4s5hk" Jan 29 17:08:06 crc kubenswrapper[4886]: I0129 17:08:06.954942 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wqs6\" (UniqueName: \"kubernetes.io/projected/3b8fde91-2520-41c6-bc79-1f6b186dcbf0-kube-api-access-8wqs6\") pod \"heat-engine-5f6fd667fd-4s5hk\" (UID: \"3b8fde91-2520-41c6-bc79-1f6b186dcbf0\") " pod="openstack/heat-engine-5f6fd667fd-4s5hk" Jan 29 17:08:06 crc kubenswrapper[4886]: I0129 17:08:06.955011 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b6ce536-47ec-45b9-b926-28f1fa7eb80a-combined-ca-bundle\") pod \"heat-api-6c7bddd46c-bnlxj\" (UID: \"7b6ce536-47ec-45b9-b926-28f1fa7eb80a\") " pod="openstack/heat-api-6c7bddd46c-bnlxj" Jan 29 17:08:06 crc kubenswrapper[4886]: I0129 17:08:06.955065 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b6ce536-47ec-45b9-b926-28f1fa7eb80a-config-data-custom\") pod \"heat-api-6c7bddd46c-bnlxj\" (UID: \"7b6ce536-47ec-45b9-b926-28f1fa7eb80a\") " pod="openstack/heat-api-6c7bddd46c-bnlxj" Jan 29 17:08:06 crc kubenswrapper[4886]: I0129 17:08:06.955143 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6cjb\" (UniqueName: \"kubernetes.io/projected/7b6ce536-47ec-45b9-b926-28f1fa7eb80a-kube-api-access-p6cjb\") pod \"heat-api-6c7bddd46c-bnlxj\" (UID: \"7b6ce536-47ec-45b9-b926-28f1fa7eb80a\") " pod="openstack/heat-api-6c7bddd46c-bnlxj" Jan 29 17:08:06 crc kubenswrapper[4886]: I0129 17:08:06.955172 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8fde91-2520-41c6-bc79-1f6b186dcbf0-config-data\") pod \"heat-engine-5f6fd667fd-4s5hk\" (UID: \"3b8fde91-2520-41c6-bc79-1f6b186dcbf0\") " pod="openstack/heat-engine-5f6fd667fd-4s5hk" Jan 29 17:08:06 crc kubenswrapper[4886]: I0129 17:08:06.955205 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a4a757-71c6-46ec-9019-8d2f64be8285-combined-ca-bundle\") pod \"heat-cfnapi-54985c87ff-g5725\" (UID: \"04a4a757-71c6-46ec-9019-8d2f64be8285\") " pod="openstack/heat-cfnapi-54985c87ff-g5725" Jan 29 17:08:06 crc kubenswrapper[4886]: I0129 17:08:06.978985 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b8fde91-2520-41c6-bc79-1f6b186dcbf0-config-data-custom\") pod \"heat-engine-5f6fd667fd-4s5hk\" (UID: \"3b8fde91-2520-41c6-bc79-1f6b186dcbf0\") " pod="openstack/heat-engine-5f6fd667fd-4s5hk" Jan 29 17:08:06 crc kubenswrapper[4886]: I0129 17:08:06.979905 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8fde91-2520-41c6-bc79-1f6b186dcbf0-combined-ca-bundle\") pod \"heat-engine-5f6fd667fd-4s5hk\" (UID: \"3b8fde91-2520-41c6-bc79-1f6b186dcbf0\") " pod="openstack/heat-engine-5f6fd667fd-4s5hk" Jan 29 17:08:06 crc kubenswrapper[4886]: I0129 17:08:06.981109 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8fde91-2520-41c6-bc79-1f6b186dcbf0-config-data\") pod \"heat-engine-5f6fd667fd-4s5hk\" (UID: \"3b8fde91-2520-41c6-bc79-1f6b186dcbf0\") " pod="openstack/heat-engine-5f6fd667fd-4s5hk" Jan 29 17:08:07 crc kubenswrapper[4886]: I0129 17:08:07.033801 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wqs6\" (UniqueName: \"kubernetes.io/projected/3b8fde91-2520-41c6-bc79-1f6b186dcbf0-kube-api-access-8wqs6\") pod \"heat-engine-5f6fd667fd-4s5hk\" (UID: \"3b8fde91-2520-41c6-bc79-1f6b186dcbf0\") " pod="openstack/heat-engine-5f6fd667fd-4s5hk" Jan 29 17:08:07 crc kubenswrapper[4886]: I0129 17:08:07.060469 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04a4a757-71c6-46ec-9019-8d2f64be8285-config-data-custom\") pod \"heat-cfnapi-54985c87ff-g5725\" (UID: \"04a4a757-71c6-46ec-9019-8d2f64be8285\") " pod="openstack/heat-cfnapi-54985c87ff-g5725" Jan 29 17:08:07 crc kubenswrapper[4886]: I0129 17:08:07.060673 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bb8rb\" (UniqueName: \"kubernetes.io/projected/04a4a757-71c6-46ec-9019-8d2f64be8285-kube-api-access-bb8rb\") pod \"heat-cfnapi-54985c87ff-g5725\" (UID: \"04a4a757-71c6-46ec-9019-8d2f64be8285\") " pod="openstack/heat-cfnapi-54985c87ff-g5725" Jan 29 17:08:07 crc kubenswrapper[4886]: I0129 17:08:07.066095 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b6ce536-47ec-45b9-b926-28f1fa7eb80a-combined-ca-bundle\") pod \"heat-api-6c7bddd46c-bnlxj\" (UID: \"7b6ce536-47ec-45b9-b926-28f1fa7eb80a\") " pod="openstack/heat-api-6c7bddd46c-bnlxj" Jan 29 17:08:07 crc kubenswrapper[4886]: I0129 17:08:07.066187 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b6ce536-47ec-45b9-b926-28f1fa7eb80a-config-data-custom\") pod \"heat-api-6c7bddd46c-bnlxj\" (UID: \"7b6ce536-47ec-45b9-b926-28f1fa7eb80a\") " pod="openstack/heat-api-6c7bddd46c-bnlxj" Jan 29 17:08:07 crc kubenswrapper[4886]: I0129 17:08:07.066391 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6cjb\" (UniqueName: \"kubernetes.io/projected/7b6ce536-47ec-45b9-b926-28f1fa7eb80a-kube-api-access-p6cjb\") pod \"heat-api-6c7bddd46c-bnlxj\" (UID: \"7b6ce536-47ec-45b9-b926-28f1fa7eb80a\") " pod="openstack/heat-api-6c7bddd46c-bnlxj" Jan 29 17:08:07 crc kubenswrapper[4886]: I0129 17:08:07.066478 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a4a757-71c6-46ec-9019-8d2f64be8285-combined-ca-bundle\") pod \"heat-cfnapi-54985c87ff-g5725\" (UID: \"04a4a757-71c6-46ec-9019-8d2f64be8285\") " pod="openstack/heat-cfnapi-54985c87ff-g5725" Jan 29 17:08:07 crc kubenswrapper[4886]: I0129 17:08:07.066615 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b6ce536-47ec-45b9-b926-28f1fa7eb80a-config-data\") pod \"heat-api-6c7bddd46c-bnlxj\" (UID: \"7b6ce536-47ec-45b9-b926-28f1fa7eb80a\") " pod="openstack/heat-api-6c7bddd46c-bnlxj" Jan 29 17:08:07 crc kubenswrapper[4886]: I0129 17:08:07.066678 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04a4a757-71c6-46ec-9019-8d2f64be8285-config-data\") pod \"heat-cfnapi-54985c87ff-g5725\" (UID: \"04a4a757-71c6-46ec-9019-8d2f64be8285\") " pod="openstack/heat-cfnapi-54985c87ff-g5725" Jan 29 17:08:07 crc kubenswrapper[4886]: I0129 17:08:07.066986 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04a4a757-71c6-46ec-9019-8d2f64be8285-config-data-custom\") pod \"heat-cfnapi-54985c87ff-g5725\" (UID: \"04a4a757-71c6-46ec-9019-8d2f64be8285\") " pod="openstack/heat-cfnapi-54985c87ff-g5725" Jan 29 17:08:07 crc kubenswrapper[4886]: I0129 17:08:07.085024 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b6ce536-47ec-45b9-b926-28f1fa7eb80a-config-data-custom\") pod \"heat-api-6c7bddd46c-bnlxj\" (UID: \"7b6ce536-47ec-45b9-b926-28f1fa7eb80a\") " pod="openstack/heat-api-6c7bddd46c-bnlxj" Jan 29 17:08:07 crc kubenswrapper[4886]: I0129 17:08:07.085919 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/04a4a757-71c6-46ec-9019-8d2f64be8285-config-data\") pod \"heat-cfnapi-54985c87ff-g5725\" (UID: \"04a4a757-71c6-46ec-9019-8d2f64be8285\") " pod="openstack/heat-cfnapi-54985c87ff-g5725" Jan 29 17:08:07 crc kubenswrapper[4886]: I0129 17:08:07.086690 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b6ce536-47ec-45b9-b926-28f1fa7eb80a-combined-ca-bundle\") pod \"heat-api-6c7bddd46c-bnlxj\" (UID: \"7b6ce536-47ec-45b9-b926-28f1fa7eb80a\") " pod="openstack/heat-api-6c7bddd46c-bnlxj" Jan 29 17:08:07 crc kubenswrapper[4886]: I0129 17:08:07.092378 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a4a757-71c6-46ec-9019-8d2f64be8285-combined-ca-bundle\") pod \"heat-cfnapi-54985c87ff-g5725\" (UID: \"04a4a757-71c6-46ec-9019-8d2f64be8285\") " pod="openstack/heat-cfnapi-54985c87ff-g5725" Jan 29 17:08:07 crc kubenswrapper[4886]: I0129 17:08:07.093054 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b6ce536-47ec-45b9-b926-28f1fa7eb80a-config-data\") pod \"heat-api-6c7bddd46c-bnlxj\" (UID: \"7b6ce536-47ec-45b9-b926-28f1fa7eb80a\") " pod="openstack/heat-api-6c7bddd46c-bnlxj" Jan 29 17:08:07 crc kubenswrapper[4886]: I0129 17:08:07.094407 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0ea79fe-a2e5-4861-be91-aba220b1b221","Type":"ContainerStarted","Data":"928834e62ea2e840bea0af8f378a7be863b8582e831ecb530090b696cd7380b1"} Jan 29 17:08:07 crc kubenswrapper[4886]: I0129 17:08:07.097606 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb8rb\" (UniqueName: \"kubernetes.io/projected/04a4a757-71c6-46ec-9019-8d2f64be8285-kube-api-access-bb8rb\") pod \"heat-cfnapi-54985c87ff-g5725\" (UID: \"04a4a757-71c6-46ec-9019-8d2f64be8285\") " pod="openstack/heat-cfnapi-54985c87ff-g5725" Jan 29 17:08:07 crc kubenswrapper[4886]: I0129 17:08:07.105152 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6cjb\" (UniqueName: \"kubernetes.io/projected/7b6ce536-47ec-45b9-b926-28f1fa7eb80a-kube-api-access-p6cjb\") pod \"heat-api-6c7bddd46c-bnlxj\" (UID: \"7b6ce536-47ec-45b9-b926-28f1fa7eb80a\") " pod="openstack/heat-api-6c7bddd46c-bnlxj" Jan 29 17:08:07 crc kubenswrapper[4886]: I0129 17:08:07.147677 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5f6fd667fd-4s5hk" Jan 29 17:08:07 crc kubenswrapper[4886]: I0129 17:08:07.224017 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-54985c87ff-g5725" Jan 29 17:08:07 crc kubenswrapper[4886]: I0129 17:08:07.235148 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6c7bddd46c-bnlxj" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:07.738841 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5f6fd667fd-4s5hk"] Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:08.146092 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f6c4bddd6-xqtdm" event={"ID":"da0e4cf4-a01f-48df-b61b-796c8bc9f60a","Type":"ContainerStarted","Data":"43336df2fcaf1b7acdf86423e30be9a3f4bd5a0f8198c273d550486720809b18"} Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:08.146455 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6f6c4bddd6-xqtdm" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:08.171263 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-f458794ff-v7p92" event={"ID":"79c81ef9-65c7-4372-9a47-8ed93521eadf","Type":"ContainerStarted","Data":"d13099f58927242dabf2518b9f0c1ef06941bb2bf99961324b02014accac3771"} Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:08.172554 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-f458794ff-v7p92" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:08.172584 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-f458794ff-v7p92" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:08.175237 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0ea79fe-a2e5-4861-be91-aba220b1b221","Type":"ContainerStarted","Data":"5d0ddc2798e73cd33929ee945c72ef848dc6759a75fd9fcc95c2f939f265b877"} Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:08.196251 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5f6fd667fd-4s5hk" event={"ID":"3b8fde91-2520-41c6-bc79-1f6b186dcbf0","Type":"ContainerStarted","Data":"20d13db972e656bc190d452afe9dd4ec56d5a39d7d01657e5c9f210465635685"} Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:08.196285 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5f6fd667fd-4s5hk" event={"ID":"3b8fde91-2520-41c6-bc79-1f6b186dcbf0","Type":"ContainerStarted","Data":"e182f1bb7108c6a8e580c33036a302e948ed4477844a9f9bc581fc486d65f70b"} Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:08.197000 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5f6fd667fd-4s5hk" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:08.218296 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6f6c4bddd6-xqtdm" podStartSLOduration=4.215341836 podStartE2EDuration="10.218258022s" podCreationTimestamp="2026-01-29 17:07:58 +0000 UTC" firstStartedPulling="2026-01-29 17:08:00.705976474 +0000 UTC m=+2763.614695746" lastFinishedPulling="2026-01-29 17:08:06.70889266 +0000 UTC m=+2769.617611932" observedRunningTime="2026-01-29 17:08:08.174255773 +0000 UTC m=+2771.082975065" watchObservedRunningTime="2026-01-29 17:08:08.218258022 +0000 UTC m=+2771.126977284" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:08.218901 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-557f889856-kwzsw" event={"ID":"3fa8d357-cef3-43d1-8338-386d9880bb82","Type":"ContainerStarted","Data":"69b0f3248bd2be75d1851a0e7878c496c05c0ca2dacd1bbce93fad67d36c48ff"} Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:08.220011 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/heat-api-557f889856-kwzsw" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:08.240450 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-f458794ff-v7p92" podStartSLOduration=6.240420366 podStartE2EDuration="6.240420366s" podCreationTimestamp="2026-01-29 17:08:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:08:08.210219046 +0000 UTC m=+2771.118938318" watchObservedRunningTime="2026-01-29 17:08:08.240420366 +0000 UTC m=+2771.149139638" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:08.262298 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5f6fd667fd-4s5hk" podStartSLOduration=2.262270732 podStartE2EDuration="2.262270732s" podCreationTimestamp="2026-01-29 17:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:08:08.225232589 +0000 UTC m=+2771.133951881" watchObservedRunningTime="2026-01-29 17:08:08.262270732 +0000 UTC m=+2771.170990004" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:08.294000 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-557f889856-kwzsw" podStartSLOduration=4.273907037 podStartE2EDuration="10.293976885s" podCreationTimestamp="2026-01-29 17:07:58 +0000 UTC" firstStartedPulling="2026-01-29 17:08:00.687602307 +0000 UTC m=+2763.596321579" lastFinishedPulling="2026-01-29 17:08:06.707672155 +0000 UTC m=+2769.616391427" observedRunningTime="2026-01-29 17:08:08.241855407 +0000 UTC m=+2771.150574689" watchObservedRunningTime="2026-01-29 17:08:08.293976885 +0000 UTC m=+2771.202696157" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:08.998623 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-btn45" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.075712 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-96hn8"] Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.075970 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" podUID="80d171a6-11ab-4cdf-b469-acb56ff11735" containerName="dnsmasq-dns" containerID="cri-o://705da8d91cb45e05b6aa5ab5b116ce8252bf3f498078113a7eee5edc1d206bca" gracePeriod=10 Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.160903 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6f6c4bddd6-xqtdm"] Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.190557 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7c65449fdf-42rxg"] Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.209117 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7c65449fdf-42rxg" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.213579 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7c65449fdf-42rxg"] Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.217294 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.217488 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.284723 4886 generic.go:334] "Generic (PLEG): container finished" podID="80d171a6-11ab-4cdf-b469-acb56ff11735" containerID="705da8d91cb45e05b6aa5ab5b116ce8252bf3f498078113a7eee5edc1d206bca" exitCode=0 Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.285835 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" event={"ID":"80d171a6-11ab-4cdf-b469-acb56ff11735","Type":"ContainerDied","Data":"705da8d91cb45e05b6aa5ab5b116ce8252bf3f498078113a7eee5edc1d206bca"} Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.338695 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2-public-tls-certs\") pod \"heat-cfnapi-7c65449fdf-42rxg\" (UID: \"c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2\") " pod="openstack/heat-cfnapi-7c65449fdf-42rxg" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.338816 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2-config-data\") pod \"heat-cfnapi-7c65449fdf-42rxg\" (UID: \"c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2\") " pod="openstack/heat-cfnapi-7c65449fdf-42rxg" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.338880 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g88x6\" (UniqueName: \"kubernetes.io/projected/c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2-kube-api-access-g88x6\") pod \"heat-cfnapi-7c65449fdf-42rxg\" (UID: \"c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2\") " pod="openstack/heat-cfnapi-7c65449fdf-42rxg" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.338906 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2-internal-tls-certs\") pod \"heat-cfnapi-7c65449fdf-42rxg\" (UID: \"c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2\") " pod="openstack/heat-cfnapi-7c65449fdf-42rxg" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.338935 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2-combined-ca-bundle\") pod \"heat-cfnapi-7c65449fdf-42rxg\" (UID: \"c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2\") " pod="openstack/heat-cfnapi-7c65449fdf-42rxg" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.338967 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2-config-data-custom\") pod \"heat-cfnapi-7c65449fdf-42rxg\" (UID: 
\"c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2\") " pod="openstack/heat-cfnapi-7c65449fdf-42rxg" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.398535 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-557f889856-kwzsw"] Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.410676 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-64bb5bfdfc-h2mgd"] Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.413216 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-64bb5bfdfc-h2mgd" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.423088 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-64bb5bfdfc-h2mgd"] Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.423478 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.423663 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.472417 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-f458794ff-v7p92" podUID="79c81ef9-65c7-4372-9a47-8ed93521eadf" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.492677 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a004f05d-8133-4d8e-9e3c-d5c9411351ad-config-data-custom\") pod \"heat-api-64bb5bfdfc-h2mgd\" (UID: \"a004f05d-8133-4d8e-9e3c-d5c9411351ad\") " pod="openstack/heat-api-64bb5bfdfc-h2mgd" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.492731 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjx4h\" (UniqueName: \"kubernetes.io/projected/a004f05d-8133-4d8e-9e3c-d5c9411351ad-kube-api-access-vjx4h\") pod \"heat-api-64bb5bfdfc-h2mgd\" (UID: \"a004f05d-8133-4d8e-9e3c-d5c9411351ad\") " pod="openstack/heat-api-64bb5bfdfc-h2mgd" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.492829 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2-public-tls-certs\") pod \"heat-cfnapi-7c65449fdf-42rxg\" (UID: \"c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2\") " pod="openstack/heat-cfnapi-7c65449fdf-42rxg" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.492989 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a004f05d-8133-4d8e-9e3c-d5c9411351ad-internal-tls-certs\") pod \"heat-api-64bb5bfdfc-h2mgd\" (UID: \"a004f05d-8133-4d8e-9e3c-d5c9411351ad\") " pod="openstack/heat-api-64bb5bfdfc-h2mgd" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.493049 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a004f05d-8133-4d8e-9e3c-d5c9411351ad-combined-ca-bundle\") pod \"heat-api-64bb5bfdfc-h2mgd\" (UID: \"a004f05d-8133-4d8e-9e3c-d5c9411351ad\") " pod="openstack/heat-api-64bb5bfdfc-h2mgd" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.493131 4886 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2-config-data\") pod \"heat-cfnapi-7c65449fdf-42rxg\" (UID: \"c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2\") " pod="openstack/heat-cfnapi-7c65449fdf-42rxg" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.493176 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a004f05d-8133-4d8e-9e3c-d5c9411351ad-public-tls-certs\") pod \"heat-api-64bb5bfdfc-h2mgd\" (UID: \"a004f05d-8133-4d8e-9e3c-d5c9411351ad\") " pod="openstack/heat-api-64bb5bfdfc-h2mgd" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.493238 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a004f05d-8133-4d8e-9e3c-d5c9411351ad-config-data\") pod \"heat-api-64bb5bfdfc-h2mgd\" (UID: \"a004f05d-8133-4d8e-9e3c-d5c9411351ad\") " pod="openstack/heat-api-64bb5bfdfc-h2mgd" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.493391 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g88x6\" (UniqueName: \"kubernetes.io/projected/c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2-kube-api-access-g88x6\") pod \"heat-cfnapi-7c65449fdf-42rxg\" (UID: \"c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2\") " pod="openstack/heat-cfnapi-7c65449fdf-42rxg" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.493433 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2-internal-tls-certs\") pod \"heat-cfnapi-7c65449fdf-42rxg\" (UID: \"c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2\") " pod="openstack/heat-cfnapi-7c65449fdf-42rxg" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.493478 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2-combined-ca-bundle\") pod \"heat-cfnapi-7c65449fdf-42rxg\" (UID: \"c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2\") " pod="openstack/heat-cfnapi-7c65449fdf-42rxg" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.493532 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2-config-data-custom\") pod \"heat-cfnapi-7c65449fdf-42rxg\" (UID: \"c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2\") " pod="openstack/heat-cfnapi-7c65449fdf-42rxg" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.522807 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2-config-data-custom\") pod \"heat-cfnapi-7c65449fdf-42rxg\" (UID: \"c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2\") " pod="openstack/heat-cfnapi-7c65449fdf-42rxg" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.524023 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2-internal-tls-certs\") pod \"heat-cfnapi-7c65449fdf-42rxg\" (UID: \"c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2\") " pod="openstack/heat-cfnapi-7c65449fdf-42rxg" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.524788 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2-combined-ca-bundle\") pod \"heat-cfnapi-7c65449fdf-42rxg\" (UID: \"c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2\") " pod="openstack/heat-cfnapi-7c65449fdf-42rxg" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.536069 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2-public-tls-certs\") pod \"heat-cfnapi-7c65449fdf-42rxg\" (UID: \"c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2\") " pod="openstack/heat-cfnapi-7c65449fdf-42rxg" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.551457 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g88x6\" (UniqueName: \"kubernetes.io/projected/c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2-kube-api-access-g88x6\") pod \"heat-cfnapi-7c65449fdf-42rxg\" (UID: \"c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2\") " pod="openstack/heat-cfnapi-7c65449fdf-42rxg" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.560855 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2-config-data\") pod \"heat-cfnapi-7c65449fdf-42rxg\" (UID: \"c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2\") " pod="openstack/heat-cfnapi-7c65449fdf-42rxg" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.567122 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7c65449fdf-42rxg" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.613900 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a004f05d-8133-4d8e-9e3c-d5c9411351ad-config-data-custom\") pod \"heat-api-64bb5bfdfc-h2mgd\" (UID: \"a004f05d-8133-4d8e-9e3c-d5c9411351ad\") " pod="openstack/heat-api-64bb5bfdfc-h2mgd" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.613969 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjx4h\" (UniqueName: \"kubernetes.io/projected/a004f05d-8133-4d8e-9e3c-d5c9411351ad-kube-api-access-vjx4h\") pod \"heat-api-64bb5bfdfc-h2mgd\" (UID: \"a004f05d-8133-4d8e-9e3c-d5c9411351ad\") " pod="openstack/heat-api-64bb5bfdfc-h2mgd" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.614049 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a004f05d-8133-4d8e-9e3c-d5c9411351ad-internal-tls-certs\") pod \"heat-api-64bb5bfdfc-h2mgd\" (UID: \"a004f05d-8133-4d8e-9e3c-d5c9411351ad\") " pod="openstack/heat-api-64bb5bfdfc-h2mgd" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.614082 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a004f05d-8133-4d8e-9e3c-d5c9411351ad-combined-ca-bundle\") pod \"heat-api-64bb5bfdfc-h2mgd\" (UID: \"a004f05d-8133-4d8e-9e3c-d5c9411351ad\") " pod="openstack/heat-api-64bb5bfdfc-h2mgd" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.614110 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a004f05d-8133-4d8e-9e3c-d5c9411351ad-public-tls-certs\") pod \"heat-api-64bb5bfdfc-h2mgd\" (UID: \"a004f05d-8133-4d8e-9e3c-d5c9411351ad\") " pod="openstack/heat-api-64bb5bfdfc-h2mgd" Jan 29 17:08:09 crc 
kubenswrapper[4886]: I0129 17:08:09.614140 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a004f05d-8133-4d8e-9e3c-d5c9411351ad-config-data\") pod \"heat-api-64bb5bfdfc-h2mgd\" (UID: \"a004f05d-8133-4d8e-9e3c-d5c9411351ad\") " pod="openstack/heat-api-64bb5bfdfc-h2mgd" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.626231 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a004f05d-8133-4d8e-9e3c-d5c9411351ad-internal-tls-certs\") pod \"heat-api-64bb5bfdfc-h2mgd\" (UID: \"a004f05d-8133-4d8e-9e3c-d5c9411351ad\") " pod="openstack/heat-api-64bb5bfdfc-h2mgd" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.627179 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a004f05d-8133-4d8e-9e3c-d5c9411351ad-config-data\") pod \"heat-api-64bb5bfdfc-h2mgd\" (UID: \"a004f05d-8133-4d8e-9e3c-d5c9411351ad\") " pod="openstack/heat-api-64bb5bfdfc-h2mgd" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.627826 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a004f05d-8133-4d8e-9e3c-d5c9411351ad-config-data-custom\") pod \"heat-api-64bb5bfdfc-h2mgd\" (UID: \"a004f05d-8133-4d8e-9e3c-d5c9411351ad\") " pod="openstack/heat-api-64bb5bfdfc-h2mgd" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.640313 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a004f05d-8133-4d8e-9e3c-d5c9411351ad-public-tls-certs\") pod \"heat-api-64bb5bfdfc-h2mgd\" (UID: \"a004f05d-8133-4d8e-9e3c-d5c9411351ad\") " pod="openstack/heat-api-64bb5bfdfc-h2mgd" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.641285 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a004f05d-8133-4d8e-9e3c-d5c9411351ad-combined-ca-bundle\") pod \"heat-api-64bb5bfdfc-h2mgd\" (UID: \"a004f05d-8133-4d8e-9e3c-d5c9411351ad\") " pod="openstack/heat-api-64bb5bfdfc-h2mgd" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.695250 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjx4h\" (UniqueName: \"kubernetes.io/projected/a004f05d-8133-4d8e-9e3c-d5c9411351ad-kube-api-access-vjx4h\") pod \"heat-api-64bb5bfdfc-h2mgd\" (UID: \"a004f05d-8133-4d8e-9e3c-d5c9411351ad\") " pod="openstack/heat-api-64bb5bfdfc-h2mgd" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.702687 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-54985c87ff-g5725"] Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.745228 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6c7bddd46c-bnlxj"] Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.776775 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-64bb5bfdfc-h2mgd" Jan 29 17:08:09 crc kubenswrapper[4886]: I0129 17:08:09.869179 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" podUID="80d171a6-11ab-4cdf-b469-acb56ff11735" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.205:5353: connect: connection refused" Jan 29 17:08:10 crc kubenswrapper[4886]: I0129 17:08:10.180169 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:08:10 crc kubenswrapper[4886]: I0129 17:08:10.337485 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-6f6c4bddd6-xqtdm" podUID="da0e4cf4-a01f-48df-b61b-796c8bc9f60a" containerName="heat-cfnapi" containerID="cri-o://43336df2fcaf1b7acdf86423e30be9a3f4bd5a0f8198c273d550486720809b18" gracePeriod=60 Jan 29 17:08:10 crc kubenswrapper[4886]: I0129 17:08:10.812485 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-f458794ff-v7p92" Jan 29 17:08:11 crc kubenswrapper[4886]: I0129 17:08:11.351427 4886 generic.go:334] "Generic (PLEG): container finished" podID="da0e4cf4-a01f-48df-b61b-796c8bc9f60a" containerID="43336df2fcaf1b7acdf86423e30be9a3f4bd5a0f8198c273d550486720809b18" exitCode=0 Jan 29 17:08:11 crc kubenswrapper[4886]: I0129 17:08:11.351528 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f6c4bddd6-xqtdm" event={"ID":"da0e4cf4-a01f-48df-b61b-796c8bc9f60a","Type":"ContainerDied","Data":"43336df2fcaf1b7acdf86423e30be9a3f4bd5a0f8198c273d550486720809b18"} Jan 29 17:08:11 crc kubenswrapper[4886]: I0129 17:08:11.351922 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-557f889856-kwzsw" podUID="3fa8d357-cef3-43d1-8338-386d9880bb82" containerName="heat-api" containerID="cri-o://69b0f3248bd2be75d1851a0e7878c496c05c0ca2dacd1bbce93fad67d36c48ff" gracePeriod=60 Jan 29 17:08:11 crc kubenswrapper[4886]: I0129 17:08:11.600859 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 17:08:12 crc kubenswrapper[4886]: I0129 17:08:12.200742 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vflxs"] Jan 29 17:08:12 crc kubenswrapper[4886]: I0129 17:08:12.204844 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vflxs" Jan 29 17:08:12 crc kubenswrapper[4886]: I0129 17:08:12.219358 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vflxs"] Jan 29 17:08:12 crc kubenswrapper[4886]: I0129 17:08:12.292203 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18c5f721-30d1-48de-97e4-52399587c9d1-utilities\") pod \"certified-operators-vflxs\" (UID: \"18c5f721-30d1-48de-97e4-52399587c9d1\") " pod="openshift-marketplace/certified-operators-vflxs" Jan 29 17:08:12 crc kubenswrapper[4886]: I0129 17:08:12.292388 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tzn7\" (UniqueName: \"kubernetes.io/projected/18c5f721-30d1-48de-97e4-52399587c9d1-kube-api-access-2tzn7\") pod \"certified-operators-vflxs\" (UID: \"18c5f721-30d1-48de-97e4-52399587c9d1\") " pod="openshift-marketplace/certified-operators-vflxs" Jan 29 17:08:12 crc kubenswrapper[4886]: I0129 17:08:12.292437 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18c5f721-30d1-48de-97e4-52399587c9d1-catalog-content\") pod \"certified-operators-vflxs\" (UID: \"18c5f721-30d1-48de-97e4-52399587c9d1\") " pod="openshift-marketplace/certified-operators-vflxs" Jan 29 17:08:12 crc kubenswrapper[4886]: I0129 17:08:12.368712 4886 generic.go:334] "Generic (PLEG): container finished" podID="3fa8d357-cef3-43d1-8338-386d9880bb82" containerID="69b0f3248bd2be75d1851a0e7878c496c05c0ca2dacd1bbce93fad67d36c48ff" exitCode=0 Jan 29 17:08:12 crc kubenswrapper[4886]: I0129 17:08:12.368771 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-557f889856-kwzsw" event={"ID":"3fa8d357-cef3-43d1-8338-386d9880bb82","Type":"ContainerDied","Data":"69b0f3248bd2be75d1851a0e7878c496c05c0ca2dacd1bbce93fad67d36c48ff"} Jan 29 17:08:12 crc kubenswrapper[4886]: I0129 17:08:12.394384 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tzn7\" (UniqueName: \"kubernetes.io/projected/18c5f721-30d1-48de-97e4-52399587c9d1-kube-api-access-2tzn7\") pod \"certified-operators-vflxs\" (UID: \"18c5f721-30d1-48de-97e4-52399587c9d1\") " pod="openshift-marketplace/certified-operators-vflxs" Jan 29 17:08:12 crc kubenswrapper[4886]: I0129 17:08:12.394469 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18c5f721-30d1-48de-97e4-52399587c9d1-catalog-content\") pod \"certified-operators-vflxs\" (UID: \"18c5f721-30d1-48de-97e4-52399587c9d1\") " pod="openshift-marketplace/certified-operators-vflxs" Jan 29 17:08:12 crc kubenswrapper[4886]: I0129 17:08:12.394616 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18c5f721-30d1-48de-97e4-52399587c9d1-utilities\") pod \"certified-operators-vflxs\" (UID: \"18c5f721-30d1-48de-97e4-52399587c9d1\") " pod="openshift-marketplace/certified-operators-vflxs" Jan 29 17:08:12 crc kubenswrapper[4886]: I0129 17:08:12.395192 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18c5f721-30d1-48de-97e4-52399587c9d1-utilities\") pod \"certified-operators-vflxs\" (UID: 
\"18c5f721-30d1-48de-97e4-52399587c9d1\") " pod="openshift-marketplace/certified-operators-vflxs" Jan 29 17:08:12 crc kubenswrapper[4886]: I0129 17:08:12.395252 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18c5f721-30d1-48de-97e4-52399587c9d1-catalog-content\") pod \"certified-operators-vflxs\" (UID: \"18c5f721-30d1-48de-97e4-52399587c9d1\") " pod="openshift-marketplace/certified-operators-vflxs" Jan 29 17:08:12 crc kubenswrapper[4886]: I0129 17:08:12.418394 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tzn7\" (UniqueName: \"kubernetes.io/projected/18c5f721-30d1-48de-97e4-52399587c9d1-kube-api-access-2tzn7\") pod \"certified-operators-vflxs\" (UID: \"18c5f721-30d1-48de-97e4-52399587c9d1\") " pod="openshift-marketplace/certified-operators-vflxs" Jan 29 17:08:12 crc kubenswrapper[4886]: I0129 17:08:12.540585 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vflxs" Jan 29 17:08:12 crc kubenswrapper[4886]: I0129 17:08:12.553926 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-f458794ff-v7p92" Jan 29 17:08:14 crc kubenswrapper[4886]: I0129 17:08:14.034523 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-6f6c4bddd6-xqtdm" podUID="da0e4cf4-a01f-48df-b61b-796c8bc9f60a" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.228:8000/healthcheck\": dial tcp 10.217.0.228:8000: connect: connection refused" Jan 29 17:08:14 crc kubenswrapper[4886]: I0129 17:08:14.103284 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-557f889856-kwzsw" podUID="3fa8d357-cef3-43d1-8338-386d9880bb82" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.229:8004/healthcheck\": dial tcp 10.217.0.229:8004: connect: connection refused" Jan 29 17:08:14 crc kubenswrapper[4886]: I0129 17:08:14.869422 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" podUID="80d171a6-11ab-4cdf-b469-acb56ff11735" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.205:5353: connect: connection refused" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.194775 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-557f889856-kwzsw" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.298476 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6f6c4bddd6-xqtdm" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.330814 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fa8d357-cef3-43d1-8338-386d9880bb82-config-data\") pod \"3fa8d357-cef3-43d1-8338-386d9880bb82\" (UID: \"3fa8d357-cef3-43d1-8338-386d9880bb82\") " Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.330870 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fa8d357-cef3-43d1-8338-386d9880bb82-combined-ca-bundle\") pod \"3fa8d357-cef3-43d1-8338-386d9880bb82\" (UID: \"3fa8d357-cef3-43d1-8338-386d9880bb82\") " Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.330902 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhn24\" (UniqueName: \"kubernetes.io/projected/3fa8d357-cef3-43d1-8338-386d9880bb82-kube-api-access-xhn24\") pod \"3fa8d357-cef3-43d1-8338-386d9880bb82\" (UID: \"3fa8d357-cef3-43d1-8338-386d9880bb82\") " Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.331042 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3fa8d357-cef3-43d1-8338-386d9880bb82-config-data-custom\") pod \"3fa8d357-cef3-43d1-8338-386d9880bb82\" (UID: \"3fa8d357-cef3-43d1-8338-386d9880bb82\") " Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.345766 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fa8d357-cef3-43d1-8338-386d9880bb82-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3fa8d357-cef3-43d1-8338-386d9880bb82" (UID: "3fa8d357-cef3-43d1-8338-386d9880bb82"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.350478 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fa8d357-cef3-43d1-8338-386d9880bb82-kube-api-access-xhn24" (OuterVolumeSpecName: "kube-api-access-xhn24") pod "3fa8d357-cef3-43d1-8338-386d9880bb82" (UID: "3fa8d357-cef3-43d1-8338-386d9880bb82"). InnerVolumeSpecName "kube-api-access-xhn24". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.381888 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.386023 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fa8d357-cef3-43d1-8338-386d9880bb82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3fa8d357-cef3-43d1-8338-386d9880bb82" (UID: "3fa8d357-cef3-43d1-8338-386d9880bb82"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.419658 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fa8d357-cef3-43d1-8338-386d9880bb82-config-data" (OuterVolumeSpecName: "config-data") pod "3fa8d357-cef3-43d1-8338-386d9880bb82" (UID: "3fa8d357-cef3-43d1-8338-386d9880bb82"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.433206 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da0e4cf4-a01f-48df-b61b-796c8bc9f60a-config-data-custom\") pod \"da0e4cf4-a01f-48df-b61b-796c8bc9f60a\" (UID: \"da0e4cf4-a01f-48df-b61b-796c8bc9f60a\") " Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.433381 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80d171a6-11ab-4cdf-b469-acb56ff11735-ovsdbserver-nb\") pod \"80d171a6-11ab-4cdf-b469-acb56ff11735\" (UID: \"80d171a6-11ab-4cdf-b469-acb56ff11735\") " Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.433481 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80d171a6-11ab-4cdf-b469-acb56ff11735-dns-svc\") pod \"80d171a6-11ab-4cdf-b469-acb56ff11735\" (UID: \"80d171a6-11ab-4cdf-b469-acb56ff11735\") " Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.433611 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0e4cf4-a01f-48df-b61b-796c8bc9f60a-combined-ca-bundle\") pod \"da0e4cf4-a01f-48df-b61b-796c8bc9f60a\" (UID: \"da0e4cf4-a01f-48df-b61b-796c8bc9f60a\") " Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.433638 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80d171a6-11ab-4cdf-b469-acb56ff11735-ovsdbserver-sb\") pod \"80d171a6-11ab-4cdf-b469-acb56ff11735\" (UID: \"80d171a6-11ab-4cdf-b469-acb56ff11735\") " Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.433857 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da0e4cf4-a01f-48df-b61b-796c8bc9f60a-config-data\") pod \"da0e4cf4-a01f-48df-b61b-796c8bc9f60a\" (UID: \"da0e4cf4-a01f-48df-b61b-796c8bc9f60a\") " Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.434188 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8bwl\" (UniqueName: \"kubernetes.io/projected/80d171a6-11ab-4cdf-b469-acb56ff11735-kube-api-access-t8bwl\") pod \"80d171a6-11ab-4cdf-b469-acb56ff11735\" (UID: \"80d171a6-11ab-4cdf-b469-acb56ff11735\") " Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.434235 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khr6q\" (UniqueName: \"kubernetes.io/projected/da0e4cf4-a01f-48df-b61b-796c8bc9f60a-kube-api-access-khr6q\") pod \"da0e4cf4-a01f-48df-b61b-796c8bc9f60a\" (UID: \"da0e4cf4-a01f-48df-b61b-796c8bc9f60a\") " Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.434297 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/80d171a6-11ab-4cdf-b469-acb56ff11735-dns-swift-storage-0\") pod \"80d171a6-11ab-4cdf-b469-acb56ff11735\" (UID: \"80d171a6-11ab-4cdf-b469-acb56ff11735\") " Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.434398 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80d171a6-11ab-4cdf-b469-acb56ff11735-config\") pod \"80d171a6-11ab-4cdf-b469-acb56ff11735\" 
(UID: \"80d171a6-11ab-4cdf-b469-acb56ff11735\") " Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.435552 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fa8d357-cef3-43d1-8338-386d9880bb82-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.435571 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fa8d357-cef3-43d1-8338-386d9880bb82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.435587 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhn24\" (UniqueName: \"kubernetes.io/projected/3fa8d357-cef3-43d1-8338-386d9880bb82-kube-api-access-xhn24\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.435624 4886 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3fa8d357-cef3-43d1-8338-386d9880bb82-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.436976 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da0e4cf4-a01f-48df-b61b-796c8bc9f60a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "da0e4cf4-a01f-48df-b61b-796c8bc9f60a" (UID: "da0e4cf4-a01f-48df-b61b-796c8bc9f60a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.445589 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da0e4cf4-a01f-48df-b61b-796c8bc9f60a-kube-api-access-khr6q" (OuterVolumeSpecName: "kube-api-access-khr6q") pod "da0e4cf4-a01f-48df-b61b-796c8bc9f60a" (UID: "da0e4cf4-a01f-48df-b61b-796c8bc9f60a"). InnerVolumeSpecName "kube-api-access-khr6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.447647 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80d171a6-11ab-4cdf-b469-acb56ff11735-kube-api-access-t8bwl" (OuterVolumeSpecName: "kube-api-access-t8bwl") pod "80d171a6-11ab-4cdf-b469-acb56ff11735" (UID: "80d171a6-11ab-4cdf-b469-acb56ff11735"). InnerVolumeSpecName "kube-api-access-t8bwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.460433 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6f6c4bddd6-xqtdm" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.460447 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f6c4bddd6-xqtdm" event={"ID":"da0e4cf4-a01f-48df-b61b-796c8bc9f60a","Type":"ContainerDied","Data":"349855b0bf0483b72492372d5c1a6d697a135a4af893483f84d1a5f6df2c5a62"} Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.460502 4886 scope.go:117] "RemoveContainer" containerID="43336df2fcaf1b7acdf86423e30be9a3f4bd5a0f8198c273d550486720809b18" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.474314 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c7bddd46c-bnlxj" event={"ID":"7b6ce536-47ec-45b9-b926-28f1fa7eb80a","Type":"ContainerStarted","Data":"961b09e7b27b7da7b2c511e013f3ab233e3894f45363e6e86d452b156483c7e5"} Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.474372 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c7bddd46c-bnlxj" event={"ID":"7b6ce536-47ec-45b9-b926-28f1fa7eb80a","Type":"ContainerStarted","Data":"28c29d3f5a45d8f6e82cfdb663ace90ab610bc4d1d57239fe93c946573d05d45"} Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.475996 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6c7bddd46c-bnlxj" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.479341 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da0e4cf4-a01f-48df-b61b-796c8bc9f60a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da0e4cf4-a01f-48df-b61b-796c8bc9f60a" (UID: "da0e4cf4-a01f-48df-b61b-796c8bc9f60a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.480765 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-54985c87ff-g5725" event={"ID":"04a4a757-71c6-46ec-9019-8d2f64be8285","Type":"ContainerStarted","Data":"d090a953dc19f1ee4b0424500aecfa717e2c4abdf9af4db4264c3428dc2d84f8"} Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.480800 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-54985c87ff-g5725" event={"ID":"04a4a757-71c6-46ec-9019-8d2f64be8285","Type":"ContainerStarted","Data":"7f461b34367fc19b6002113f40bc4d964e2fb98d4e2fb8a58fd1680309b095e9"} Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.481815 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-54985c87ff-g5725" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.507863 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"be43aab6-3888-4260-a85c-147e2ae0a36d","Type":"ContainerStarted","Data":"a238adb9e047d62411d78f0b37ed4276b323e2049accd30dfa5c15023aeaa6e5"} Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.524623 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0ea79fe-a2e5-4861-be91-aba220b1b221","Type":"ContainerStarted","Data":"463c890cb672987e4db62f57b14305282dced80284ec2842a2e3a25befe23bf9"} Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.527087 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-557f889856-kwzsw" event={"ID":"3fa8d357-cef3-43d1-8338-386d9880bb82","Type":"ContainerDied","Data":"8e93f8d9b007e6405d2291aa2ff9660432275194b991846ebc2d8ccfab880ce5"} Jan 29 17:08:17 crc 
kubenswrapper[4886]: I0129 17:08:17.527143 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-557f889856-kwzsw" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.527153 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80d171a6-11ab-4cdf-b469-acb56ff11735-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "80d171a6-11ab-4cdf-b469-acb56ff11735" (UID: "80d171a6-11ab-4cdf-b469-acb56ff11735"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.527729 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80d171a6-11ab-4cdf-b469-acb56ff11735-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "80d171a6-11ab-4cdf-b469-acb56ff11735" (UID: "80d171a6-11ab-4cdf-b469-acb56ff11735"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.552161 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8bwl\" (UniqueName: \"kubernetes.io/projected/80d171a6-11ab-4cdf-b469-acb56ff11735-kube-api-access-t8bwl\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.552188 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khr6q\" (UniqueName: \"kubernetes.io/projected/da0e4cf4-a01f-48df-b61b-796c8bc9f60a-kube-api-access-khr6q\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.552198 4886 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da0e4cf4-a01f-48df-b61b-796c8bc9f60a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.552207 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80d171a6-11ab-4cdf-b469-acb56ff11735-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.552218 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80d171a6-11ab-4cdf-b469-acb56ff11735-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.552227 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0e4cf4-a01f-48df-b61b-796c8bc9f60a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.555190 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" event={"ID":"80d171a6-11ab-4cdf-b469-acb56ff11735","Type":"ContainerDied","Data":"81bf0e642c0dbb7fd724006f0c2c518606f7b43d2584453df92bcfe55b829357"} Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.555297 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-96hn8" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.563197 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6c7bddd46c-bnlxj" podStartSLOduration=11.563180157 podStartE2EDuration="11.563180157s" podCreationTimestamp="2026-01-29 17:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:08:17.508396734 +0000 UTC m=+2780.417116006" watchObservedRunningTime="2026-01-29 17:08:17.563180157 +0000 UTC m=+2780.471899429" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.576892 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-54985c87ff-g5725" podStartSLOduration=11.576875763 podStartE2EDuration="11.576875763s" podCreationTimestamp="2026-01-29 17:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:08:17.5586728 +0000 UTC m=+2780.467392072" watchObservedRunningTime="2026-01-29 17:08:17.576875763 +0000 UTC m=+2780.485595035" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.579547 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da0e4cf4-a01f-48df-b61b-796c8bc9f60a-config-data" (OuterVolumeSpecName: "config-data") pod "da0e4cf4-a01f-48df-b61b-796c8bc9f60a" (UID: "da0e4cf4-a01f-48df-b61b-796c8bc9f60a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.583874 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80d171a6-11ab-4cdf-b469-acb56ff11735-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "80d171a6-11ab-4cdf-b469-acb56ff11735" (UID: "80d171a6-11ab-4cdf-b469-acb56ff11735"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.618838 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80d171a6-11ab-4cdf-b469-acb56ff11735-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "80d171a6-11ab-4cdf-b469-acb56ff11735" (UID: "80d171a6-11ab-4cdf-b469-acb56ff11735"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.629236 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.702422509 podStartE2EDuration="27.629215137s" podCreationTimestamp="2026-01-29 17:07:50 +0000 UTC" firstStartedPulling="2026-01-29 17:07:51.725421842 +0000 UTC m=+2754.634141114" lastFinishedPulling="2026-01-29 17:08:16.65221448 +0000 UTC m=+2779.560933742" observedRunningTime="2026-01-29 17:08:17.58778299 +0000 UTC m=+2780.496502262" watchObservedRunningTime="2026-01-29 17:08:17.629215137 +0000 UTC m=+2780.537934409" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.644241 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.644549 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="849de0d3-3456-44c2-bef4-3a435e4a432a" containerName="glance-log" containerID="cri-o://685691dd71892e3462a49d43e961e4398610edbd2ff6858db714971fb73711e6" gracePeriod=30 Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.645268 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="849de0d3-3456-44c2-bef4-3a435e4a432a" containerName="glance-httpd" containerID="cri-o://5e2f27254ecaeae6872715e18449eaa22b877597c8124da7a49920ec97100c5d" gracePeriod=30 Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.651086 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80d171a6-11ab-4cdf-b469-acb56ff11735-config" (OuterVolumeSpecName: "config") pod "80d171a6-11ab-4cdf-b469-acb56ff11735" (UID: "80d171a6-11ab-4cdf-b469-acb56ff11735"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.669110 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80d171a6-11ab-4cdf-b469-acb56ff11735-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.669141 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da0e4cf4-a01f-48df-b61b-796c8bc9f60a-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.669151 4886 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/80d171a6-11ab-4cdf-b469-acb56ff11735-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.669160 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80d171a6-11ab-4cdf-b469-acb56ff11735-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.683870 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7c65449fdf-42rxg"] Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.696458 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vflxs"] Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.710281 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-64bb5bfdfc-h2mgd"] Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.944707 4886 scope.go:117] "RemoveContainer" containerID="69b0f3248bd2be75d1851a0e7878c496c05c0ca2dacd1bbce93fad67d36c48ff" Jan 29 17:08:17 crc kubenswrapper[4886]: I0129 17:08:17.994133 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-557f889856-kwzsw"] Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.026318 4886 scope.go:117] "RemoveContainer" containerID="705da8d91cb45e05b6aa5ab5b116ce8252bf3f498078113a7eee5edc1d206bca" Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.027713 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-557f889856-kwzsw"] Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.045985 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-96hn8"] Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.089686 4886 scope.go:117] "RemoveContainer" containerID="26aa10c89bd28f4d17b03fabdd3c3dd7d4b1ab633d533650ee03163b7c656cd5" Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.089840 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-96hn8"] Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.089871 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6f6c4bddd6-xqtdm"] Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.108335 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6f6c4bddd6-xqtdm"] Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.579506 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-64bb5bfdfc-h2mgd" event={"ID":"a004f05d-8133-4d8e-9e3c-d5c9411351ad","Type":"ContainerStarted","Data":"9ba203f1577fa4a2278281eb05f99b6b37f54638178327c02a931842f3130f2d"} Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.579788 4886 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/heat-api-64bb5bfdfc-h2mgd" event={"ID":"a004f05d-8133-4d8e-9e3c-d5c9411351ad","Type":"ContainerStarted","Data":"18d920fdb752d4bed66e2d78d64074a05d8a6665fcb8abfb885f6d42e0a27fe6"} Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.581176 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-64bb5bfdfc-h2mgd" Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.591816 4886 generic.go:334] "Generic (PLEG): container finished" podID="04a4a757-71c6-46ec-9019-8d2f64be8285" containerID="d090a953dc19f1ee4b0424500aecfa717e2c4abdf9af4db4264c3428dc2d84f8" exitCode=1 Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.591872 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-54985c87ff-g5725" event={"ID":"04a4a757-71c6-46ec-9019-8d2f64be8285","Type":"ContainerDied","Data":"d090a953dc19f1ee4b0424500aecfa717e2c4abdf9af4db4264c3428dc2d84f8"} Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.592533 4886 scope.go:117] "RemoveContainer" containerID="d090a953dc19f1ee4b0424500aecfa717e2c4abdf9af4db4264c3428dc2d84f8" Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.597645 4886 generic.go:334] "Generic (PLEG): container finished" podID="18c5f721-30d1-48de-97e4-52399587c9d1" containerID="be55140e95fb2c7fd3a46b1ece79fa3d9132da294caa5ac8edf498151a8ce0b2" exitCode=0 Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.597710 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vflxs" event={"ID":"18c5f721-30d1-48de-97e4-52399587c9d1","Type":"ContainerDied","Data":"be55140e95fb2c7fd3a46b1ece79fa3d9132da294caa5ac8edf498151a8ce0b2"} Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.597739 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vflxs" event={"ID":"18c5f721-30d1-48de-97e4-52399587c9d1","Type":"ContainerStarted","Data":"fe354152829de757ca5537dde1fd3cfc8eb62b13a98c62b74ae6e9f6ed2f435c"} Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.607681 4886 generic.go:334] "Generic (PLEG): container finished" podID="849de0d3-3456-44c2-bef4-3a435e4a432a" containerID="685691dd71892e3462a49d43e961e4398610edbd2ff6858db714971fb73711e6" exitCode=143 Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.607764 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"849de0d3-3456-44c2-bef4-3a435e4a432a","Type":"ContainerDied","Data":"685691dd71892e3462a49d43e961e4398610edbd2ff6858db714971fb73711e6"} Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.608950 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-64bb5bfdfc-h2mgd" podStartSLOduration=9.608929532 podStartE2EDuration="9.608929532s" podCreationTimestamp="2026-01-29 17:08:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:08:18.600614637 +0000 UTC m=+2781.509333919" watchObservedRunningTime="2026-01-29 17:08:18.608929532 +0000 UTC m=+2781.517648804" Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.612371 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0ea79fe-a2e5-4861-be91-aba220b1b221","Type":"ContainerStarted","Data":"d07a1d9b916e4f3e7a8a1402794315d10d0fa212b37288654a33188aff743885"} Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.637912 4886 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="3fa8d357-cef3-43d1-8338-386d9880bb82" path="/var/lib/kubelet/pods/3fa8d357-cef3-43d1-8338-386d9880bb82/volumes" Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.638499 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80d171a6-11ab-4cdf-b469-acb56ff11735" path="/var/lib/kubelet/pods/80d171a6-11ab-4cdf-b469-acb56ff11735/volumes" Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.639081 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da0e4cf4-a01f-48df-b61b-796c8bc9f60a" path="/var/lib/kubelet/pods/da0e4cf4-a01f-48df-b61b-796c8bc9f60a/volumes" Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.639988 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7c65449fdf-42rxg" event={"ID":"c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2","Type":"ContainerStarted","Data":"6958576a6365fc34d774dc5015cbac18d99aa6811ed0a85bec28185deabe80bb"} Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.640034 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7c65449fdf-42rxg" Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.640047 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7c65449fdf-42rxg" event={"ID":"c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2","Type":"ContainerStarted","Data":"5eed4ad641d5dcf9de58cae60e50f69e712d0406c6aff33afb9c67bd75e5be40"} Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.651739 4886 generic.go:334] "Generic (PLEG): container finished" podID="7b6ce536-47ec-45b9-b926-28f1fa7eb80a" containerID="961b09e7b27b7da7b2c511e013f3ab233e3894f45363e6e86d452b156483c7e5" exitCode=1 Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.652432 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c7bddd46c-bnlxj" event={"ID":"7b6ce536-47ec-45b9-b926-28f1fa7eb80a","Type":"ContainerDied","Data":"961b09e7b27b7da7b2c511e013f3ab233e3894f45363e6e86d452b156483c7e5"} Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.652731 4886 scope.go:117] "RemoveContainer" containerID="961b09e7b27b7da7b2c511e013f3ab233e3894f45363e6e86d452b156483c7e5" Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.687088 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7c65449fdf-42rxg" podStartSLOduration=9.687073813 podStartE2EDuration="9.687073813s" podCreationTimestamp="2026-01-29 17:08:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:08:18.657535361 +0000 UTC m=+2781.566254633" watchObservedRunningTime="2026-01-29 17:08:18.687073813 +0000 UTC m=+2781.595793085" Jan 29 17:08:18 crc kubenswrapper[4886]: I0129 17:08:18.875689 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-54f8bbfbf-9qjxm" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.523005 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-n9fr6"] Jan 29 17:08:19 crc kubenswrapper[4886]: E0129 17:08:19.523845 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80d171a6-11ab-4cdf-b469-acb56ff11735" containerName="init" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.523865 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="80d171a6-11ab-4cdf-b469-acb56ff11735" containerName="init" Jan 29 17:08:19 crc kubenswrapper[4886]: E0129 17:08:19.523888 
4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80d171a6-11ab-4cdf-b469-acb56ff11735" containerName="dnsmasq-dns" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.523895 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="80d171a6-11ab-4cdf-b469-acb56ff11735" containerName="dnsmasq-dns" Jan 29 17:08:19 crc kubenswrapper[4886]: E0129 17:08:19.523919 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da0e4cf4-a01f-48df-b61b-796c8bc9f60a" containerName="heat-cfnapi" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.523925 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="da0e4cf4-a01f-48df-b61b-796c8bc9f60a" containerName="heat-cfnapi" Jan 29 17:08:19 crc kubenswrapper[4886]: E0129 17:08:19.523935 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fa8d357-cef3-43d1-8338-386d9880bb82" containerName="heat-api" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.523944 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fa8d357-cef3-43d1-8338-386d9880bb82" containerName="heat-api" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.524188 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="da0e4cf4-a01f-48df-b61b-796c8bc9f60a" containerName="heat-cfnapi" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.524223 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="80d171a6-11ab-4cdf-b469-acb56ff11735" containerName="dnsmasq-dns" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.524234 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fa8d357-cef3-43d1-8338-386d9880bb82" containerName="heat-api" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.525405 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-n9fr6" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.544792 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-n9fr6"] Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.622041 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-6jmdx"] Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.623685 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6jmdx" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.640892 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6jmdx"] Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.653387 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-4e9f-account-create-update-sdhth"] Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.655564 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-4e9f-account-create-update-sdhth" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.657771 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.667144 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4e9f-account-create-update-sdhth"] Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.671155 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea6c4698-f001-402f-91e3-1e80bc7bf443-operator-scripts\") pod \"nova-api-db-create-n9fr6\" (UID: \"ea6c4698-f001-402f-91e3-1e80bc7bf443\") " pod="openstack/nova-api-db-create-n9fr6" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.671212 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxnqt\" (UniqueName: \"kubernetes.io/projected/ea6c4698-f001-402f-91e3-1e80bc7bf443-kube-api-access-gxnqt\") pod \"nova-api-db-create-n9fr6\" (UID: \"ea6c4698-f001-402f-91e3-1e80bc7bf443\") " pod="openstack/nova-api-db-create-n9fr6" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.673159 4886 generic.go:334] "Generic (PLEG): container finished" podID="7b6ce536-47ec-45b9-b926-28f1fa7eb80a" containerID="2eb9aac70b8d95e0c6e925aa406b960e03929e9d6915153ce56a560a835d977d" exitCode=1 Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.673237 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c7bddd46c-bnlxj" event={"ID":"7b6ce536-47ec-45b9-b926-28f1fa7eb80a","Type":"ContainerDied","Data":"2eb9aac70b8d95e0c6e925aa406b960e03929e9d6915153ce56a560a835d977d"} Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.673269 4886 scope.go:117] "RemoveContainer" containerID="961b09e7b27b7da7b2c511e013f3ab233e3894f45363e6e86d452b156483c7e5" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.674030 4886 scope.go:117] "RemoveContainer" containerID="2eb9aac70b8d95e0c6e925aa406b960e03929e9d6915153ce56a560a835d977d" Jan 29 17:08:19 crc kubenswrapper[4886]: E0129 17:08:19.674405 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6c7bddd46c-bnlxj_openstack(7b6ce536-47ec-45b9-b926-28f1fa7eb80a)\"" pod="openstack/heat-api-6c7bddd46c-bnlxj" podUID="7b6ce536-47ec-45b9-b926-28f1fa7eb80a" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.704571 4886 generic.go:334] "Generic (PLEG): container finished" podID="04a4a757-71c6-46ec-9019-8d2f64be8285" containerID="269b4adc6e6be10392170084dc412e856cfe62aa07302ce9122a8ed94105dabe" exitCode=1 Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.706468 4886 scope.go:117] "RemoveContainer" containerID="269b4adc6e6be10392170084dc412e856cfe62aa07302ce9122a8ed94105dabe" Jan 29 17:08:19 crc kubenswrapper[4886]: E0129 17:08:19.706685 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-54985c87ff-g5725_openstack(04a4a757-71c6-46ec-9019-8d2f64be8285)\"" pod="openstack/heat-cfnapi-54985c87ff-g5725" podUID="04a4a757-71c6-46ec-9019-8d2f64be8285" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.706714 4886 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/heat-cfnapi-54985c87ff-g5725" event={"ID":"04a4a757-71c6-46ec-9019-8d2f64be8285","Type":"ContainerDied","Data":"269b4adc6e6be10392170084dc412e856cfe62aa07302ce9122a8ed94105dabe"} Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.774811 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0abefc39-4eb0-4600-8e11-b5d4af3c11b4-operator-scripts\") pod \"nova-cell0-db-create-6jmdx\" (UID: \"0abefc39-4eb0-4600-8e11-b5d4af3c11b4\") " pod="openstack/nova-cell0-db-create-6jmdx" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.776104 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d13e59b2-0b15-4b7f-b158-ea16ec2b5416-operator-scripts\") pod \"nova-api-4e9f-account-create-update-sdhth\" (UID: \"d13e59b2-0b15-4b7f-b158-ea16ec2b5416\") " pod="openstack/nova-api-4e9f-account-create-update-sdhth" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.776219 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea6c4698-f001-402f-91e3-1e80bc7bf443-operator-scripts\") pod \"nova-api-db-create-n9fr6\" (UID: \"ea6c4698-f001-402f-91e3-1e80bc7bf443\") " pod="openstack/nova-api-db-create-n9fr6" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.776265 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxnqt\" (UniqueName: \"kubernetes.io/projected/ea6c4698-f001-402f-91e3-1e80bc7bf443-kube-api-access-gxnqt\") pod \"nova-api-db-create-n9fr6\" (UID: \"ea6c4698-f001-402f-91e3-1e80bc7bf443\") " pod="openstack/nova-api-db-create-n9fr6" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.776381 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkt66\" (UniqueName: \"kubernetes.io/projected/0abefc39-4eb0-4600-8e11-b5d4af3c11b4-kube-api-access-pkt66\") pod \"nova-cell0-db-create-6jmdx\" (UID: \"0abefc39-4eb0-4600-8e11-b5d4af3c11b4\") " pod="openstack/nova-cell0-db-create-6jmdx" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.776549 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9n6n\" (UniqueName: \"kubernetes.io/projected/d13e59b2-0b15-4b7f-b158-ea16ec2b5416-kube-api-access-r9n6n\") pod \"nova-api-4e9f-account-create-update-sdhth\" (UID: \"d13e59b2-0b15-4b7f-b158-ea16ec2b5416\") " pod="openstack/nova-api-4e9f-account-create-update-sdhth" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.778729 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea6c4698-f001-402f-91e3-1e80bc7bf443-operator-scripts\") pod \"nova-api-db-create-n9fr6\" (UID: \"ea6c4698-f001-402f-91e3-1e80bc7bf443\") " pod="openstack/nova-api-db-create-n9fr6" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.782771 4886 scope.go:117] "RemoveContainer" containerID="d090a953dc19f1ee4b0424500aecfa717e2c4abdf9af4db4264c3428dc2d84f8" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.828145 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-vqrmb"] Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.829876 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-vqrmb" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.854424 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxnqt\" (UniqueName: \"kubernetes.io/projected/ea6c4698-f001-402f-91e3-1e80bc7bf443-kube-api-access-gxnqt\") pod \"nova-api-db-create-n9fr6\" (UID: \"ea6c4698-f001-402f-91e3-1e80bc7bf443\") " pod="openstack/nova-api-db-create-n9fr6" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.884414 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vqrmb"] Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.884460 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9n6n\" (UniqueName: \"kubernetes.io/projected/d13e59b2-0b15-4b7f-b158-ea16ec2b5416-kube-api-access-r9n6n\") pod \"nova-api-4e9f-account-create-update-sdhth\" (UID: \"d13e59b2-0b15-4b7f-b158-ea16ec2b5416\") " pod="openstack/nova-api-4e9f-account-create-update-sdhth" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.884881 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0abefc39-4eb0-4600-8e11-b5d4af3c11b4-operator-scripts\") pod \"nova-cell0-db-create-6jmdx\" (UID: \"0abefc39-4eb0-4600-8e11-b5d4af3c11b4\") " pod="openstack/nova-cell0-db-create-6jmdx" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.884990 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d13e59b2-0b15-4b7f-b158-ea16ec2b5416-operator-scripts\") pod \"nova-api-4e9f-account-create-update-sdhth\" (UID: \"d13e59b2-0b15-4b7f-b158-ea16ec2b5416\") " pod="openstack/nova-api-4e9f-account-create-update-sdhth" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.885121 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkt66\" (UniqueName: \"kubernetes.io/projected/0abefc39-4eb0-4600-8e11-b5d4af3c11b4-kube-api-access-pkt66\") pod \"nova-cell0-db-create-6jmdx\" (UID: \"0abefc39-4eb0-4600-8e11-b5d4af3c11b4\") " pod="openstack/nova-cell0-db-create-6jmdx" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.886779 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0abefc39-4eb0-4600-8e11-b5d4af3c11b4-operator-scripts\") pod \"nova-cell0-db-create-6jmdx\" (UID: \"0abefc39-4eb0-4600-8e11-b5d4af3c11b4\") " pod="openstack/nova-cell0-db-create-6jmdx" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.887759 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d13e59b2-0b15-4b7f-b158-ea16ec2b5416-operator-scripts\") pod \"nova-api-4e9f-account-create-update-sdhth\" (UID: \"d13e59b2-0b15-4b7f-b158-ea16ec2b5416\") " pod="openstack/nova-api-4e9f-account-create-update-sdhth" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.901309 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cc0e-account-create-update-nxk7k"] Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.903133 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cc0e-account-create-update-nxk7k" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.904766 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.907026 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9n6n\" (UniqueName: \"kubernetes.io/projected/d13e59b2-0b15-4b7f-b158-ea16ec2b5416-kube-api-access-r9n6n\") pod \"nova-api-4e9f-account-create-update-sdhth\" (UID: \"d13e59b2-0b15-4b7f-b158-ea16ec2b5416\") " pod="openstack/nova-api-4e9f-account-create-update-sdhth" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.907961 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkt66\" (UniqueName: \"kubernetes.io/projected/0abefc39-4eb0-4600-8e11-b5d4af3c11b4-kube-api-access-pkt66\") pod \"nova-cell0-db-create-6jmdx\" (UID: \"0abefc39-4eb0-4600-8e11-b5d4af3c11b4\") " pod="openstack/nova-cell0-db-create-6jmdx" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.937939 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cc0e-account-create-update-nxk7k"] Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.946102 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6jmdx" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.978299 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4e9f-account-create-update-sdhth" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.987035 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0772ac7-3374-4607-a644-f4ac2e1c078a-operator-scripts\") pod \"nova-cell1-db-create-vqrmb\" (UID: \"d0772ac7-3374-4607-a644-f4ac2e1c078a\") " pod="openstack/nova-cell1-db-create-vqrmb" Jan 29 17:08:19 crc kubenswrapper[4886]: I0129 17:08:19.987259 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtmbn\" (UniqueName: \"kubernetes.io/projected/d0772ac7-3374-4607-a644-f4ac2e1c078a-kube-api-access-jtmbn\") pod \"nova-cell1-db-create-vqrmb\" (UID: \"d0772ac7-3374-4607-a644-f4ac2e1c078a\") " pod="openstack/nova-cell1-db-create-vqrmb" Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.045927 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-f9c8-account-create-update-hcc42"] Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.047563 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-f9c8-account-create-update-hcc42" Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.056894 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.061548 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f9c8-account-create-update-hcc42"] Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.098678 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtmbn\" (UniqueName: \"kubernetes.io/projected/d0772ac7-3374-4607-a644-f4ac2e1c078a-kube-api-access-jtmbn\") pod \"nova-cell1-db-create-vqrmb\" (UID: \"d0772ac7-3374-4607-a644-f4ac2e1c078a\") " pod="openstack/nova-cell1-db-create-vqrmb" Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.103825 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6af00928-6484-4071-b739-bc211ac220ef-operator-scripts\") pod \"nova-cell0-cc0e-account-create-update-nxk7k\" (UID: \"6af00928-6484-4071-b739-bc211ac220ef\") " pod="openstack/nova-cell0-cc0e-account-create-update-nxk7k" Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.104099 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0772ac7-3374-4607-a644-f4ac2e1c078a-operator-scripts\") pod \"nova-cell1-db-create-vqrmb\" (UID: \"d0772ac7-3374-4607-a644-f4ac2e1c078a\") " pod="openstack/nova-cell1-db-create-vqrmb" Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.113710 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk62r\" (UniqueName: \"kubernetes.io/projected/6af00928-6484-4071-b739-bc211ac220ef-kube-api-access-pk62r\") pod \"nova-cell0-cc0e-account-create-update-nxk7k\" (UID: \"6af00928-6484-4071-b739-bc211ac220ef\") " pod="openstack/nova-cell0-cc0e-account-create-update-nxk7k" Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.114894 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0772ac7-3374-4607-a644-f4ac2e1c078a-operator-scripts\") pod \"nova-cell1-db-create-vqrmb\" (UID: \"d0772ac7-3374-4607-a644-f4ac2e1c078a\") " pod="openstack/nova-cell1-db-create-vqrmb" Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.125710 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtmbn\" (UniqueName: \"kubernetes.io/projected/d0772ac7-3374-4607-a644-f4ac2e1c078a-kube-api-access-jtmbn\") pod \"nova-cell1-db-create-vqrmb\" (UID: \"d0772ac7-3374-4607-a644-f4ac2e1c078a\") " pod="openstack/nova-cell1-db-create-vqrmb" Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.141539 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-n9fr6" Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.219784 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk62r\" (UniqueName: \"kubernetes.io/projected/6af00928-6484-4071-b739-bc211ac220ef-kube-api-access-pk62r\") pod \"nova-cell0-cc0e-account-create-update-nxk7k\" (UID: \"6af00928-6484-4071-b739-bc211ac220ef\") " pod="openstack/nova-cell0-cc0e-account-create-update-nxk7k" Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.219948 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6af00928-6484-4071-b739-bc211ac220ef-operator-scripts\") pod \"nova-cell0-cc0e-account-create-update-nxk7k\" (UID: \"6af00928-6484-4071-b739-bc211ac220ef\") " pod="openstack/nova-cell0-cc0e-account-create-update-nxk7k" Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.219985 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8258df8a-fd9a-4546-8ea7-ce4b7f7180bb-operator-scripts\") pod \"nova-cell1-f9c8-account-create-update-hcc42\" (UID: \"8258df8a-fd9a-4546-8ea7-ce4b7f7180bb\") " pod="openstack/nova-cell1-f9c8-account-create-update-hcc42" Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.220147 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlwld\" (UniqueName: \"kubernetes.io/projected/8258df8a-fd9a-4546-8ea7-ce4b7f7180bb-kube-api-access-tlwld\") pod \"nova-cell1-f9c8-account-create-update-hcc42\" (UID: \"8258df8a-fd9a-4546-8ea7-ce4b7f7180bb\") " pod="openstack/nova-cell1-f9c8-account-create-update-hcc42" Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.221183 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6af00928-6484-4071-b739-bc211ac220ef-operator-scripts\") pod \"nova-cell0-cc0e-account-create-update-nxk7k\" (UID: \"6af00928-6484-4071-b739-bc211ac220ef\") " pod="openstack/nova-cell0-cc0e-account-create-update-nxk7k" Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.241032 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk62r\" (UniqueName: \"kubernetes.io/projected/6af00928-6484-4071-b739-bc211ac220ef-kube-api-access-pk62r\") pod \"nova-cell0-cc0e-account-create-update-nxk7k\" (UID: \"6af00928-6484-4071-b739-bc211ac220ef\") " pod="openstack/nova-cell0-cc0e-account-create-update-nxk7k" Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.325734 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8258df8a-fd9a-4546-8ea7-ce4b7f7180bb-operator-scripts\") pod \"nova-cell1-f9c8-account-create-update-hcc42\" (UID: \"8258df8a-fd9a-4546-8ea7-ce4b7f7180bb\") " pod="openstack/nova-cell1-f9c8-account-create-update-hcc42" Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.325874 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlwld\" (UniqueName: \"kubernetes.io/projected/8258df8a-fd9a-4546-8ea7-ce4b7f7180bb-kube-api-access-tlwld\") pod \"nova-cell1-f9c8-account-create-update-hcc42\" (UID: \"8258df8a-fd9a-4546-8ea7-ce4b7f7180bb\") " pod="openstack/nova-cell1-f9c8-account-create-update-hcc42" Jan 29 17:08:20 crc kubenswrapper[4886]: 
I0129 17:08:20.326953 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8258df8a-fd9a-4546-8ea7-ce4b7f7180bb-operator-scripts\") pod \"nova-cell1-f9c8-account-create-update-hcc42\" (UID: \"8258df8a-fd9a-4546-8ea7-ce4b7f7180bb\") " pod="openstack/nova-cell1-f9c8-account-create-update-hcc42" Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.349083 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlwld\" (UniqueName: \"kubernetes.io/projected/8258df8a-fd9a-4546-8ea7-ce4b7f7180bb-kube-api-access-tlwld\") pod \"nova-cell1-f9c8-account-create-update-hcc42\" (UID: \"8258df8a-fd9a-4546-8ea7-ce4b7f7180bb\") " pod="openstack/nova-cell1-f9c8-account-create-update-hcc42" Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.351299 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vqrmb" Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.381785 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cc0e-account-create-update-nxk7k" Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.405462 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f9c8-account-create-update-hcc42" Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.427406 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.427622 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf" containerName="glance-log" containerID="cri-o://d46a9e5456f252ab3dd8ef0ca224f83e7f91449851fd433a23e9070eb20e028e" gracePeriod=30 Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.427756 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf" containerName="glance-httpd" containerID="cri-o://819d3c493df902007da456da0899d275e457a2f0ed2e48aedaf84f652820cb61" gracePeriod=30 Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.782374 4886 scope.go:117] "RemoveContainer" containerID="2eb9aac70b8d95e0c6e925aa406b960e03929e9d6915153ce56a560a835d977d" Jan 29 17:08:20 crc kubenswrapper[4886]: E0129 17:08:20.783290 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6c7bddd46c-bnlxj_openstack(7b6ce536-47ec-45b9-b926-28f1fa7eb80a)\"" pod="openstack/heat-api-6c7bddd46c-bnlxj" podUID="7b6ce536-47ec-45b9-b926-28f1fa7eb80a" Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.791351 4886 scope.go:117] "RemoveContainer" containerID="269b4adc6e6be10392170084dc412e856cfe62aa07302ce9122a8ed94105dabe" Jan 29 17:08:20 crc kubenswrapper[4886]: E0129 17:08:20.791612 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-54985c87ff-g5725_openstack(04a4a757-71c6-46ec-9019-8d2f64be8285)\"" pod="openstack/heat-cfnapi-54985c87ff-g5725" podUID="04a4a757-71c6-46ec-9019-8d2f64be8285" Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.802542 4886 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vflxs" event={"ID":"18c5f721-30d1-48de-97e4-52399587c9d1","Type":"ContainerStarted","Data":"afb5da406ee3b16e59af7913d87b7d9742dbcfd595f22b00884d57064f6bdef1"} Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.822591 4886 generic.go:334] "Generic (PLEG): container finished" podID="16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf" containerID="d46a9e5456f252ab3dd8ef0ca224f83e7f91449851fd433a23e9070eb20e028e" exitCode=143 Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.823510 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf","Type":"ContainerDied","Data":"d46a9e5456f252ab3dd8ef0ca224f83e7f91449851fd433a23e9070eb20e028e"} Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.896694 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6jmdx"] Jan 29 17:08:20 crc kubenswrapper[4886]: I0129 17:08:20.939730 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4e9f-account-create-update-sdhth"] Jan 29 17:08:21 crc kubenswrapper[4886]: I0129 17:08:21.014900 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-n9fr6"] Jan 29 17:08:21 crc kubenswrapper[4886]: W0129 17:08:21.091410 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea6c4698_f001_402f_91e3_1e80bc7bf443.slice/crio-97aa039de70a06170f71988b76c9396909f3b7178da4b75eb9a0fd7d820bb21d WatchSource:0}: Error finding container 97aa039de70a06170f71988b76c9396909f3b7178da4b75eb9a0fd7d820bb21d: Status 404 returned error can't find the container with id 97aa039de70a06170f71988b76c9396909f3b7178da4b75eb9a0fd7d820bb21d Jan 29 17:08:21 crc kubenswrapper[4886]: I0129 17:08:21.569690 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vqrmb"] Jan 29 17:08:21 crc kubenswrapper[4886]: I0129 17:08:21.593912 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f9c8-account-create-update-hcc42"] Jan 29 17:08:21 crc kubenswrapper[4886]: I0129 17:08:21.620104 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cc0e-account-create-update-nxk7k"] Jan 29 17:08:21 crc kubenswrapper[4886]: I0129 17:08:21.855144 4886 generic.go:334] "Generic (PLEG): container finished" podID="0abefc39-4eb0-4600-8e11-b5d4af3c11b4" containerID="8cff761f0cac80358e499809ffa647d36a191c7af1a493dc00f71f33ae4223f1" exitCode=0 Jan 29 17:08:21 crc kubenswrapper[4886]: I0129 17:08:21.855856 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6jmdx" event={"ID":"0abefc39-4eb0-4600-8e11-b5d4af3c11b4","Type":"ContainerDied","Data":"8cff761f0cac80358e499809ffa647d36a191c7af1a493dc00f71f33ae4223f1"} Jan 29 17:08:21 crc kubenswrapper[4886]: I0129 17:08:21.855973 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6jmdx" event={"ID":"0abefc39-4eb0-4600-8e11-b5d4af3c11b4","Type":"ContainerStarted","Data":"1d6d2eb795c39ee31f6bd0a881882b56df9889d142ea82ed82c62281b1f67996"} Jan 29 17:08:21 crc kubenswrapper[4886]: I0129 17:08:21.861647 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f9c8-account-create-update-hcc42" 
event={"ID":"8258df8a-fd9a-4546-8ea7-ce4b7f7180bb","Type":"ContainerStarted","Data":"e1eabc32a80d150906ee8042c9b91dd9d3a691eb3e8f2321170f2610258d0695"} Jan 29 17:08:21 crc kubenswrapper[4886]: I0129 17:08:21.868024 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-n9fr6" event={"ID":"ea6c4698-f001-402f-91e3-1e80bc7bf443","Type":"ContainerStarted","Data":"92b4d1b2f475024d893ea29a83366ecc7f80ef2e9282821adbce174622472058"} Jan 29 17:08:21 crc kubenswrapper[4886]: I0129 17:08:21.868068 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-n9fr6" event={"ID":"ea6c4698-f001-402f-91e3-1e80bc7bf443","Type":"ContainerStarted","Data":"97aa039de70a06170f71988b76c9396909f3b7178da4b75eb9a0fd7d820bb21d"} Jan 29 17:08:21 crc kubenswrapper[4886]: I0129 17:08:21.876576 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cc0e-account-create-update-nxk7k" event={"ID":"6af00928-6484-4071-b739-bc211ac220ef","Type":"ContainerStarted","Data":"91c7222c3b9f7d5be92754c25f343aeff5c1732b0217924a2ad1edc9eaf57e78"} Jan 29 17:08:21 crc kubenswrapper[4886]: I0129 17:08:21.891865 4886 generic.go:334] "Generic (PLEG): container finished" podID="849de0d3-3456-44c2-bef4-3a435e4a432a" containerID="5e2f27254ecaeae6872715e18449eaa22b877597c8124da7a49920ec97100c5d" exitCode=0 Jan 29 17:08:21 crc kubenswrapper[4886]: I0129 17:08:21.891986 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"849de0d3-3456-44c2-bef4-3a435e4a432a","Type":"ContainerDied","Data":"5e2f27254ecaeae6872715e18449eaa22b877597c8124da7a49920ec97100c5d"} Jan 29 17:08:21 crc kubenswrapper[4886]: I0129 17:08:21.917524 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0ea79fe-a2e5-4861-be91-aba220b1b221","Type":"ContainerStarted","Data":"97f8f5e0387fde773bf154bf18b428f934c3b6dd32a6b73bb76a513b5a291c63"} Jan 29 17:08:21 crc kubenswrapper[4886]: I0129 17:08:21.918156 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e0ea79fe-a2e5-4861-be91-aba220b1b221" containerName="ceilometer-central-agent" containerID="cri-o://5d0ddc2798e73cd33929ee945c72ef848dc6759a75fd9fcc95c2f939f265b877" gracePeriod=30 Jan 29 17:08:21 crc kubenswrapper[4886]: I0129 17:08:21.918547 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 17:08:21 crc kubenswrapper[4886]: I0129 17:08:21.918592 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e0ea79fe-a2e5-4861-be91-aba220b1b221" containerName="proxy-httpd" containerID="cri-o://97f8f5e0387fde773bf154bf18b428f934c3b6dd32a6b73bb76a513b5a291c63" gracePeriod=30 Jan 29 17:08:21 crc kubenswrapper[4886]: I0129 17:08:21.918672 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e0ea79fe-a2e5-4861-be91-aba220b1b221" containerName="sg-core" containerID="cri-o://d07a1d9b916e4f3e7a8a1402794315d10d0fa212b37288654a33188aff743885" gracePeriod=30 Jan 29 17:08:21 crc kubenswrapper[4886]: I0129 17:08:21.918721 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e0ea79fe-a2e5-4861-be91-aba220b1b221" containerName="ceilometer-notification-agent" containerID="cri-o://463c890cb672987e4db62f57b14305282dced80284ec2842a2e3a25befe23bf9" gracePeriod=30 Jan 29 17:08:21 crc 
kubenswrapper[4886]: I0129 17:08:21.920549 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vqrmb" event={"ID":"d0772ac7-3374-4607-a644-f4ac2e1c078a","Type":"ContainerStarted","Data":"56926e28702f7f49449b25045bd4430aca71c4abfb7465c1932db4f3abec35bc"} Jan 29 17:08:21 crc kubenswrapper[4886]: I0129 17:08:21.929379 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4e9f-account-create-update-sdhth" event={"ID":"d13e59b2-0b15-4b7f-b158-ea16ec2b5416","Type":"ContainerStarted","Data":"b398660f408eb077ec37e46aac34f95a01068c141577a940f5d64dfc4dc0b027"} Jan 29 17:08:21 crc kubenswrapper[4886]: I0129 17:08:21.929424 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4e9f-account-create-update-sdhth" event={"ID":"d13e59b2-0b15-4b7f-b158-ea16ec2b5416","Type":"ContainerStarted","Data":"e4805d6955b6d3e0ebc12d0484bdd410741675cd4a31046222f6b6bd45082c68"} Jan 29 17:08:21 crc kubenswrapper[4886]: I0129 17:08:21.960667 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.028104266 podStartE2EDuration="18.960649395s" podCreationTimestamp="2026-01-29 17:08:03 +0000 UTC" firstStartedPulling="2026-01-29 17:08:06.648673064 +0000 UTC m=+2769.557392336" lastFinishedPulling="2026-01-29 17:08:20.581218193 +0000 UTC m=+2783.489937465" observedRunningTime="2026-01-29 17:08:21.948494192 +0000 UTC m=+2784.857213464" watchObservedRunningTime="2026-01-29 17:08:21.960649395 +0000 UTC m=+2784.869368667" Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.224854 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-54985c87ff-g5725" Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.225147 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-54985c87ff-g5725" Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.225644 4886 scope.go:117] "RemoveContainer" containerID="269b4adc6e6be10392170084dc412e856cfe62aa07302ce9122a8ed94105dabe" Jan 29 17:08:22 crc kubenswrapper[4886]: E0129 17:08:22.225972 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-54985c87ff-g5725_openstack(04a4a757-71c6-46ec-9019-8d2f64be8285)\"" pod="openstack/heat-cfnapi-54985c87ff-g5725" podUID="04a4a757-71c6-46ec-9019-8d2f64be8285" Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.237826 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-6c7bddd46c-bnlxj" Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.237874 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6c7bddd46c-bnlxj" Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.238787 4886 scope.go:117] "RemoveContainer" containerID="2eb9aac70b8d95e0c6e925aa406b960e03929e9d6915153ce56a560a835d977d" Jan 29 17:08:22 crc kubenswrapper[4886]: E0129 17:08:22.239023 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6c7bddd46c-bnlxj_openstack(7b6ce536-47ec-45b9-b926-28f1fa7eb80a)\"" pod="openstack/heat-api-6c7bddd46c-bnlxj" podUID="7b6ce536-47ec-45b9-b926-28f1fa7eb80a" Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.312445 4886 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.431882 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/849de0d3-3456-44c2-bef4-3a435e4a432a-scripts\") pod \"849de0d3-3456-44c2-bef4-3a435e4a432a\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.432078 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/849de0d3-3456-44c2-bef4-3a435e4a432a-httpd-run\") pod \"849de0d3-3456-44c2-bef4-3a435e4a432a\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.432588 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/849de0d3-3456-44c2-bef4-3a435e4a432a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "849de0d3-3456-44c2-bef4-3a435e4a432a" (UID: "849de0d3-3456-44c2-bef4-3a435e4a432a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.432899 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\") pod \"849de0d3-3456-44c2-bef4-3a435e4a432a\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.432986 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fglvx\" (UniqueName: \"kubernetes.io/projected/849de0d3-3456-44c2-bef4-3a435e4a432a-kube-api-access-fglvx\") pod \"849de0d3-3456-44c2-bef4-3a435e4a432a\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.433019 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/849de0d3-3456-44c2-bef4-3a435e4a432a-public-tls-certs\") pod \"849de0d3-3456-44c2-bef4-3a435e4a432a\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.433035 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/849de0d3-3456-44c2-bef4-3a435e4a432a-combined-ca-bundle\") pod \"849de0d3-3456-44c2-bef4-3a435e4a432a\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.433062 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/849de0d3-3456-44c2-bef4-3a435e4a432a-logs\") pod \"849de0d3-3456-44c2-bef4-3a435e4a432a\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.433110 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/849de0d3-3456-44c2-bef4-3a435e4a432a-config-data\") pod \"849de0d3-3456-44c2-bef4-3a435e4a432a\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") " Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.433710 4886 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/849de0d3-3456-44c2-bef4-3a435e4a432a-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.436546 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/849de0d3-3456-44c2-bef4-3a435e4a432a-logs" (OuterVolumeSpecName: "logs") pod "849de0d3-3456-44c2-bef4-3a435e4a432a" (UID: "849de0d3-3456-44c2-bef4-3a435e4a432a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.442988 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/849de0d3-3456-44c2-bef4-3a435e4a432a-kube-api-access-fglvx" (OuterVolumeSpecName: "kube-api-access-fglvx") pod "849de0d3-3456-44c2-bef4-3a435e4a432a" (UID: "849de0d3-3456-44c2-bef4-3a435e4a432a"). InnerVolumeSpecName "kube-api-access-fglvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.473350 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/849de0d3-3456-44c2-bef4-3a435e4a432a-scripts" (OuterVolumeSpecName: "scripts") pod "849de0d3-3456-44c2-bef4-3a435e4a432a" (UID: "849de0d3-3456-44c2-bef4-3a435e4a432a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.537000 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/849de0d3-3456-44c2-bef4-3a435e4a432a-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.537030 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fglvx\" (UniqueName: \"kubernetes.io/projected/849de0d3-3456-44c2-bef4-3a435e4a432a-kube-api-access-fglvx\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.537041 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/849de0d3-3456-44c2-bef4-3a435e4a432a-logs\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.641434 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1" (OuterVolumeSpecName: "glance") pod "849de0d3-3456-44c2-bef4-3a435e4a432a" (UID: "849de0d3-3456-44c2-bef4-3a435e4a432a"). InnerVolumeSpecName "pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 17:08:22 crc kubenswrapper[4886]: E0129 17:08:22.642147 4886 reconciler_common.go:156] "operationExecutor.UnmountVolume failed (controllerAttachDetachEnabled true) for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\") pod \"849de0d3-3456-44c2-bef4-3a435e4a432a\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") : UnmountVolume.NewUnmounter failed for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\") pod \"849de0d3-3456-44c2-bef4-3a435e4a432a\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/849de0d3-3456-44c2-bef4-3a435e4a432a/volumes/kubernetes.io~csi/pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/849de0d3-3456-44c2-bef4-3a435e4a432a/volumes/kubernetes.io~csi/pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1/vol_data.json]: open /var/lib/kubelet/pods/849de0d3-3456-44c2-bef4-3a435e4a432a/volumes/kubernetes.io~csi/pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1/vol_data.json: no such file or directory" err="UnmountVolume.NewUnmounter failed for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\") pod \"849de0d3-3456-44c2-bef4-3a435e4a432a\" (UID: \"849de0d3-3456-44c2-bef4-3a435e4a432a\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/849de0d3-3456-44c2-bef4-3a435e4a432a/volumes/kubernetes.io~csi/pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/849de0d3-3456-44c2-bef4-3a435e4a432a/volumes/kubernetes.io~csi/pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1/vol_data.json]: open /var/lib/kubelet/pods/849de0d3-3456-44c2-bef4-3a435e4a432a/volumes/kubernetes.io~csi/pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1/vol_data.json: no such file or directory" Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.643163 4886 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\") on node \"crc\" " Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.704180 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/849de0d3-3456-44c2-bef4-3a435e4a432a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "849de0d3-3456-44c2-bef4-3a435e4a432a" (UID: "849de0d3-3456-44c2-bef4-3a435e4a432a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.748233 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/849de0d3-3456-44c2-bef4-3a435e4a432a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.760972 4886 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.762422 4886 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1") on node "crc" Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.801988 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/849de0d3-3456-44c2-bef4-3a435e4a432a-config-data" (OuterVolumeSpecName: "config-data") pod "849de0d3-3456-44c2-bef4-3a435e4a432a" (UID: "849de0d3-3456-44c2-bef4-3a435e4a432a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.852658 4886 reconciler_common.go:293] "Volume detached for volume \"pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.852700 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/849de0d3-3456-44c2-bef4-3a435e4a432a-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.877644 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/849de0d3-3456-44c2-bef4-3a435e4a432a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "849de0d3-3456-44c2-bef4-3a435e4a432a" (UID: "849de0d3-3456-44c2-bef4-3a435e4a432a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.954605 4886 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/849de0d3-3456-44c2-bef4-3a435e4a432a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.972994 4886 generic.go:334] "Generic (PLEG): container finished" podID="ea6c4698-f001-402f-91e3-1e80bc7bf443" containerID="92b4d1b2f475024d893ea29a83366ecc7f80ef2e9282821adbce174622472058" exitCode=0 Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.973070 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-n9fr6" event={"ID":"ea6c4698-f001-402f-91e3-1e80bc7bf443","Type":"ContainerDied","Data":"92b4d1b2f475024d893ea29a83366ecc7f80ef2e9282821adbce174622472058"} Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.988109 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cc0e-account-create-update-nxk7k" event={"ID":"6af00928-6484-4071-b739-bc211ac220ef","Type":"ContainerStarted","Data":"e03fdcc391c686ad6f7c447bf2012b345cc1a12adaddfc3b0b7fbabe7adbed61"} Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.999001 4886 generic.go:334] "Generic (PLEG): container finished" podID="18c5f721-30d1-48de-97e4-52399587c9d1" containerID="afb5da406ee3b16e59af7913d87b7d9742dbcfd595f22b00884d57064f6bdef1" exitCode=0 Jan 29 17:08:22 crc kubenswrapper[4886]: I0129 17:08:22.999066 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vflxs" event={"ID":"18c5f721-30d1-48de-97e4-52399587c9d1","Type":"ContainerDied","Data":"afb5da406ee3b16e59af7913d87b7d9742dbcfd595f22b00884d57064f6bdef1"} Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.011256 4886 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"849de0d3-3456-44c2-bef4-3a435e4a432a","Type":"ContainerDied","Data":"6c945ea15f303c81064b58dfa01521088d6d511849d81e35019f4fd66c782c28"} Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.011302 4886 scope.go:117] "RemoveContainer" containerID="5e2f27254ecaeae6872715e18449eaa22b877597c8124da7a49920ec97100c5d" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.011451 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.015034 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cc0e-account-create-update-nxk7k" podStartSLOduration=4.015020022 podStartE2EDuration="4.015020022s" podCreationTimestamp="2026-01-29 17:08:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:08:23.005218226 +0000 UTC m=+2785.913937508" watchObservedRunningTime="2026-01-29 17:08:23.015020022 +0000 UTC m=+2785.923739294" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.025025 4886 generic.go:334] "Generic (PLEG): container finished" podID="e0ea79fe-a2e5-4861-be91-aba220b1b221" containerID="97f8f5e0387fde773bf154bf18b428f934c3b6dd32a6b73bb76a513b5a291c63" exitCode=0 Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.025057 4886 generic.go:334] "Generic (PLEG): container finished" podID="e0ea79fe-a2e5-4861-be91-aba220b1b221" containerID="d07a1d9b916e4f3e7a8a1402794315d10d0fa212b37288654a33188aff743885" exitCode=2 Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.025065 4886 generic.go:334] "Generic (PLEG): container finished" podID="e0ea79fe-a2e5-4861-be91-aba220b1b221" containerID="463c890cb672987e4db62f57b14305282dced80284ec2842a2e3a25befe23bf9" exitCode=0 Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.025071 4886 generic.go:334] "Generic (PLEG): container finished" podID="e0ea79fe-a2e5-4861-be91-aba220b1b221" containerID="5d0ddc2798e73cd33929ee945c72ef848dc6759a75fd9fcc95c2f939f265b877" exitCode=0 Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.025118 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0ea79fe-a2e5-4861-be91-aba220b1b221","Type":"ContainerDied","Data":"97f8f5e0387fde773bf154bf18b428f934c3b6dd32a6b73bb76a513b5a291c63"} Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.025705 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0ea79fe-a2e5-4861-be91-aba220b1b221","Type":"ContainerDied","Data":"d07a1d9b916e4f3e7a8a1402794315d10d0fa212b37288654a33188aff743885"} Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.025719 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0ea79fe-a2e5-4861-be91-aba220b1b221","Type":"ContainerDied","Data":"463c890cb672987e4db62f57b14305282dced80284ec2842a2e3a25befe23bf9"} Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.025727 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0ea79fe-a2e5-4861-be91-aba220b1b221","Type":"ContainerDied","Data":"5d0ddc2798e73cd33929ee945c72ef848dc6759a75fd9fcc95c2f939f265b877"} Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.035905 4886 generic.go:334] "Generic (PLEG): container finished" 
podID="d0772ac7-3374-4607-a644-f4ac2e1c078a" containerID="e75acdd55522e91761ce2d771dbc17900e4f53d297811cf9623f07bc70ba7052" exitCode=0 Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.035985 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vqrmb" event={"ID":"d0772ac7-3374-4607-a644-f4ac2e1c078a","Type":"ContainerDied","Data":"e75acdd55522e91761ce2d771dbc17900e4f53d297811cf9623f07bc70ba7052"} Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.041704 4886 generic.go:334] "Generic (PLEG): container finished" podID="d13e59b2-0b15-4b7f-b158-ea16ec2b5416" containerID="b398660f408eb077ec37e46aac34f95a01068c141577a940f5d64dfc4dc0b027" exitCode=0 Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.041768 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4e9f-account-create-update-sdhth" event={"ID":"d13e59b2-0b15-4b7f-b158-ea16ec2b5416","Type":"ContainerDied","Data":"b398660f408eb077ec37e46aac34f95a01068c141577a940f5d64dfc4dc0b027"} Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.054487 4886 generic.go:334] "Generic (PLEG): container finished" podID="8258df8a-fd9a-4546-8ea7-ce4b7f7180bb" containerID="55979afc492dd3730aa23e20e090c57835e6091af47e18bbcd87fee5afa8dde9" exitCode=0 Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.054936 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f9c8-account-create-update-hcc42" event={"ID":"8258df8a-fd9a-4546-8ea7-ce4b7f7180bb","Type":"ContainerDied","Data":"55979afc492dd3730aa23e20e090c57835e6091af47e18bbcd87fee5afa8dde9"} Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.137544 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.163194 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0ea79fe-a2e5-4861-be91-aba220b1b221-sg-core-conf-yaml\") pod \"e0ea79fe-a2e5-4861-be91-aba220b1b221\" (UID: \"e0ea79fe-a2e5-4861-be91-aba220b1b221\") " Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.163346 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0ea79fe-a2e5-4861-be91-aba220b1b221-log-httpd\") pod \"e0ea79fe-a2e5-4861-be91-aba220b1b221\" (UID: \"e0ea79fe-a2e5-4861-be91-aba220b1b221\") " Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.163411 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0ea79fe-a2e5-4861-be91-aba220b1b221-scripts\") pod \"e0ea79fe-a2e5-4861-be91-aba220b1b221\" (UID: \"e0ea79fe-a2e5-4861-be91-aba220b1b221\") " Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.163478 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0ea79fe-a2e5-4861-be91-aba220b1b221-run-httpd\") pod \"e0ea79fe-a2e5-4861-be91-aba220b1b221\" (UID: \"e0ea79fe-a2e5-4861-be91-aba220b1b221\") " Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.163576 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt8hq\" (UniqueName: \"kubernetes.io/projected/e0ea79fe-a2e5-4861-be91-aba220b1b221-kube-api-access-rt8hq\") pod \"e0ea79fe-a2e5-4861-be91-aba220b1b221\" (UID: \"e0ea79fe-a2e5-4861-be91-aba220b1b221\") " Jan 29 17:08:23 crc 
kubenswrapper[4886]: I0129 17:08:23.163612 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0ea79fe-a2e5-4861-be91-aba220b1b221-combined-ca-bundle\") pod \"e0ea79fe-a2e5-4861-be91-aba220b1b221\" (UID: \"e0ea79fe-a2e5-4861-be91-aba220b1b221\") " Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.163755 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0ea79fe-a2e5-4861-be91-aba220b1b221-config-data\") pod \"e0ea79fe-a2e5-4861-be91-aba220b1b221\" (UID: \"e0ea79fe-a2e5-4861-be91-aba220b1b221\") " Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.165352 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0ea79fe-a2e5-4861-be91-aba220b1b221-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e0ea79fe-a2e5-4861-be91-aba220b1b221" (UID: "e0ea79fe-a2e5-4861-be91-aba220b1b221"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.167042 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.175843 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0ea79fe-a2e5-4861-be91-aba220b1b221-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e0ea79fe-a2e5-4861-be91-aba220b1b221" (UID: "e0ea79fe-a2e5-4861-be91-aba220b1b221"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.179631 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0ea79fe-a2e5-4861-be91-aba220b1b221-kube-api-access-rt8hq" (OuterVolumeSpecName: "kube-api-access-rt8hq") pod "e0ea79fe-a2e5-4861-be91-aba220b1b221" (UID: "e0ea79fe-a2e5-4861-be91-aba220b1b221"). InnerVolumeSpecName "kube-api-access-rt8hq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.199143 4886 scope.go:117] "RemoveContainer" containerID="685691dd71892e3462a49d43e961e4398610edbd2ff6858db714971fb73711e6" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.199258 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.204253 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0ea79fe-a2e5-4861-be91-aba220b1b221-scripts" (OuterVolumeSpecName: "scripts") pod "e0ea79fe-a2e5-4861-be91-aba220b1b221" (UID: "e0ea79fe-a2e5-4861-be91-aba220b1b221"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.265461 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 17:08:23 crc kubenswrapper[4886]: E0129 17:08:23.267277 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0ea79fe-a2e5-4861-be91-aba220b1b221" containerName="proxy-httpd" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.267297 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0ea79fe-a2e5-4861-be91-aba220b1b221" containerName="proxy-httpd" Jan 29 17:08:23 crc kubenswrapper[4886]: E0129 17:08:23.267355 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0ea79fe-a2e5-4861-be91-aba220b1b221" containerName="ceilometer-notification-agent" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.267364 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0ea79fe-a2e5-4861-be91-aba220b1b221" containerName="ceilometer-notification-agent" Jan 29 17:08:23 crc kubenswrapper[4886]: E0129 17:08:23.267382 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0ea79fe-a2e5-4861-be91-aba220b1b221" containerName="sg-core" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.267389 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0ea79fe-a2e5-4861-be91-aba220b1b221" containerName="sg-core" Jan 29 17:08:23 crc kubenswrapper[4886]: E0129 17:08:23.267406 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849de0d3-3456-44c2-bef4-3a435e4a432a" containerName="glance-httpd" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.267414 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="849de0d3-3456-44c2-bef4-3a435e4a432a" containerName="glance-httpd" Jan 29 17:08:23 crc kubenswrapper[4886]: E0129 17:08:23.267434 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849de0d3-3456-44c2-bef4-3a435e4a432a" containerName="glance-log" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.267442 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="849de0d3-3456-44c2-bef4-3a435e4a432a" containerName="glance-log" Jan 29 17:08:23 crc kubenswrapper[4886]: E0129 17:08:23.267460 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0ea79fe-a2e5-4861-be91-aba220b1b221" containerName="ceilometer-central-agent" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.267468 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0ea79fe-a2e5-4861-be91-aba220b1b221" containerName="ceilometer-central-agent" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.267745 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0ea79fe-a2e5-4861-be91-aba220b1b221" containerName="ceilometer-central-agent" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.267762 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0ea79fe-a2e5-4861-be91-aba220b1b221" containerName="sg-core" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.267779 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0ea79fe-a2e5-4861-be91-aba220b1b221" containerName="ceilometer-notification-agent" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.267794 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="849de0d3-3456-44c2-bef4-3a435e4a432a" containerName="glance-log" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.267802 4886 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="849de0d3-3456-44c2-bef4-3a435e4a432a" containerName="glance-httpd" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.267822 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0ea79fe-a2e5-4861-be91-aba220b1b221" containerName="proxy-httpd" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.268410 4886 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0ea79fe-a2e5-4861-be91-aba220b1b221-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.268439 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0ea79fe-a2e5-4861-be91-aba220b1b221-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.268471 4886 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0ea79fe-a2e5-4861-be91-aba220b1b221-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.268537 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rt8hq\" (UniqueName: \"kubernetes.io/projected/e0ea79fe-a2e5-4861-be91-aba220b1b221-kube-api-access-rt8hq\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.269797 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.271881 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.272088 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.291947 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0ea79fe-a2e5-4861-be91-aba220b1b221-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e0ea79fe-a2e5-4861-be91-aba220b1b221" (UID: "e0ea79fe-a2e5-4861-be91-aba220b1b221"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.295684 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.347200 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0ea79fe-a2e5-4861-be91-aba220b1b221-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0ea79fe-a2e5-4861-be91-aba220b1b221" (UID: "e0ea79fe-a2e5-4861-be91-aba220b1b221"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.364813 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0ea79fe-a2e5-4861-be91-aba220b1b221-config-data" (OuterVolumeSpecName: "config-data") pod "e0ea79fe-a2e5-4861-be91-aba220b1b221" (UID: "e0ea79fe-a2e5-4861-be91-aba220b1b221"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.375084 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2dbf03ea-9df9-4f03-aee9-113dabed1c7a-logs\") pod \"glance-default-external-api-0\" (UID: \"2dbf03ea-9df9-4f03-aee9-113dabed1c7a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.375408 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dbf03ea-9df9-4f03-aee9-113dabed1c7a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2dbf03ea-9df9-4f03-aee9-113dabed1c7a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.375532 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nmlh\" (UniqueName: \"kubernetes.io/projected/2dbf03ea-9df9-4f03-aee9-113dabed1c7a-kube-api-access-7nmlh\") pod \"glance-default-external-api-0\" (UID: \"2dbf03ea-9df9-4f03-aee9-113dabed1c7a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.375627 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\") pod \"glance-default-external-api-0\" (UID: \"2dbf03ea-9df9-4f03-aee9-113dabed1c7a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.375781 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dbf03ea-9df9-4f03-aee9-113dabed1c7a-config-data\") pod \"glance-default-external-api-0\" (UID: \"2dbf03ea-9df9-4f03-aee9-113dabed1c7a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.375798 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dbf03ea-9df9-4f03-aee9-113dabed1c7a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2dbf03ea-9df9-4f03-aee9-113dabed1c7a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.375848 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2dbf03ea-9df9-4f03-aee9-113dabed1c7a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2dbf03ea-9df9-4f03-aee9-113dabed1c7a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.375912 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dbf03ea-9df9-4f03-aee9-113dabed1c7a-scripts\") pod \"glance-default-external-api-0\" (UID: \"2dbf03ea-9df9-4f03-aee9-113dabed1c7a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.376053 4886 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/e0ea79fe-a2e5-4861-be91-aba220b1b221-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.376065 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0ea79fe-a2e5-4861-be91-aba220b1b221-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.376074 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0ea79fe-a2e5-4861-be91-aba220b1b221-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.478444 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dbf03ea-9df9-4f03-aee9-113dabed1c7a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2dbf03ea-9df9-4f03-aee9-113dabed1c7a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.478530 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nmlh\" (UniqueName: \"kubernetes.io/projected/2dbf03ea-9df9-4f03-aee9-113dabed1c7a-kube-api-access-7nmlh\") pod \"glance-default-external-api-0\" (UID: \"2dbf03ea-9df9-4f03-aee9-113dabed1c7a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.478607 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\") pod \"glance-default-external-api-0\" (UID: \"2dbf03ea-9df9-4f03-aee9-113dabed1c7a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.478692 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dbf03ea-9df9-4f03-aee9-113dabed1c7a-config-data\") pod \"glance-default-external-api-0\" (UID: \"2dbf03ea-9df9-4f03-aee9-113dabed1c7a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.478714 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dbf03ea-9df9-4f03-aee9-113dabed1c7a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2dbf03ea-9df9-4f03-aee9-113dabed1c7a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.478748 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2dbf03ea-9df9-4f03-aee9-113dabed1c7a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2dbf03ea-9df9-4f03-aee9-113dabed1c7a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.478796 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dbf03ea-9df9-4f03-aee9-113dabed1c7a-scripts\") pod \"glance-default-external-api-0\" (UID: \"2dbf03ea-9df9-4f03-aee9-113dabed1c7a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.478853 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/2dbf03ea-9df9-4f03-aee9-113dabed1c7a-logs\") pod \"glance-default-external-api-0\" (UID: \"2dbf03ea-9df9-4f03-aee9-113dabed1c7a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.481128 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2dbf03ea-9df9-4f03-aee9-113dabed1c7a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2dbf03ea-9df9-4f03-aee9-113dabed1c7a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.482258 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.482289 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\") pod \"glance-default-external-api-0\" (UID: \"2dbf03ea-9df9-4f03-aee9-113dabed1c7a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9fc1bf04f61733e1543e4c6d32069c38c610c3d0fa9a349fa6a409f3542d3c50/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.485483 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dbf03ea-9df9-4f03-aee9-113dabed1c7a-scripts\") pod \"glance-default-external-api-0\" (UID: \"2dbf03ea-9df9-4f03-aee9-113dabed1c7a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.485819 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2dbf03ea-9df9-4f03-aee9-113dabed1c7a-logs\") pod \"glance-default-external-api-0\" (UID: \"2dbf03ea-9df9-4f03-aee9-113dabed1c7a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.488669 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dbf03ea-9df9-4f03-aee9-113dabed1c7a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2dbf03ea-9df9-4f03-aee9-113dabed1c7a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.489555 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dbf03ea-9df9-4f03-aee9-113dabed1c7a-config-data\") pod \"glance-default-external-api-0\" (UID: \"2dbf03ea-9df9-4f03-aee9-113dabed1c7a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.490247 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dbf03ea-9df9-4f03-aee9-113dabed1c7a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2dbf03ea-9df9-4f03-aee9-113dabed1c7a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.518392 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nmlh\" (UniqueName: \"kubernetes.io/projected/2dbf03ea-9df9-4f03-aee9-113dabed1c7a-kube-api-access-7nmlh\") pod 
\"glance-default-external-api-0\" (UID: \"2dbf03ea-9df9-4f03-aee9-113dabed1c7a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.549255 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-n9fr6" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.576563 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a580962-e55c-4bdc-ba31-c39bc4f20fb1\") pod \"glance-default-external-api-0\" (UID: \"2dbf03ea-9df9-4f03-aee9-113dabed1c7a\") " pod="openstack/glance-default-external-api-0" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.580123 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea6c4698-f001-402f-91e3-1e80bc7bf443-operator-scripts\") pod \"ea6c4698-f001-402f-91e3-1e80bc7bf443\" (UID: \"ea6c4698-f001-402f-91e3-1e80bc7bf443\") " Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.580291 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxnqt\" (UniqueName: \"kubernetes.io/projected/ea6c4698-f001-402f-91e3-1e80bc7bf443-kube-api-access-gxnqt\") pod \"ea6c4698-f001-402f-91e3-1e80bc7bf443\" (UID: \"ea6c4698-f001-402f-91e3-1e80bc7bf443\") " Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.581477 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea6c4698-f001-402f-91e3-1e80bc7bf443-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ea6c4698-f001-402f-91e3-1e80bc7bf443" (UID: "ea6c4698-f001-402f-91e3-1e80bc7bf443"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.590225 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea6c4698-f001-402f-91e3-1e80bc7bf443-kube-api-access-gxnqt" (OuterVolumeSpecName: "kube-api-access-gxnqt") pod "ea6c4698-f001-402f-91e3-1e80bc7bf443" (UID: "ea6c4698-f001-402f-91e3-1e80bc7bf443"). InnerVolumeSpecName "kube-api-access-gxnqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.591699 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.684094 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea6c4698-f001-402f-91e3-1e80bc7bf443-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.684859 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxnqt\" (UniqueName: \"kubernetes.io/projected/ea6c4698-f001-402f-91e3-1e80bc7bf443-kube-api-access-gxnqt\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.967978 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4e9f-account-create-update-sdhth" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.979642 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-6jmdx" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.995292 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0abefc39-4eb0-4600-8e11-b5d4af3c11b4-operator-scripts\") pod \"0abefc39-4eb0-4600-8e11-b5d4af3c11b4\" (UID: \"0abefc39-4eb0-4600-8e11-b5d4af3c11b4\") " Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.995448 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9n6n\" (UniqueName: \"kubernetes.io/projected/d13e59b2-0b15-4b7f-b158-ea16ec2b5416-kube-api-access-r9n6n\") pod \"d13e59b2-0b15-4b7f-b158-ea16ec2b5416\" (UID: \"d13e59b2-0b15-4b7f-b158-ea16ec2b5416\") " Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.995486 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d13e59b2-0b15-4b7f-b158-ea16ec2b5416-operator-scripts\") pod \"d13e59b2-0b15-4b7f-b158-ea16ec2b5416\" (UID: \"d13e59b2-0b15-4b7f-b158-ea16ec2b5416\") " Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.995590 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkt66\" (UniqueName: \"kubernetes.io/projected/0abefc39-4eb0-4600-8e11-b5d4af3c11b4-kube-api-access-pkt66\") pod \"0abefc39-4eb0-4600-8e11-b5d4af3c11b4\" (UID: \"0abefc39-4eb0-4600-8e11-b5d4af3c11b4\") " Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.995957 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0abefc39-4eb0-4600-8e11-b5d4af3c11b4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0abefc39-4eb0-4600-8e11-b5d4af3c11b4" (UID: "0abefc39-4eb0-4600-8e11-b5d4af3c11b4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.996644 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0abefc39-4eb0-4600-8e11-b5d4af3c11b4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:23 crc kubenswrapper[4886]: I0129 17:08:23.996893 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d13e59b2-0b15-4b7f-b158-ea16ec2b5416-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d13e59b2-0b15-4b7f-b158-ea16ec2b5416" (UID: "d13e59b2-0b15-4b7f-b158-ea16ec2b5416"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.003856 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0abefc39-4eb0-4600-8e11-b5d4af3c11b4-kube-api-access-pkt66" (OuterVolumeSpecName: "kube-api-access-pkt66") pod "0abefc39-4eb0-4600-8e11-b5d4af3c11b4" (UID: "0abefc39-4eb0-4600-8e11-b5d4af3c11b4"). InnerVolumeSpecName "kube-api-access-pkt66". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.005266 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d13e59b2-0b15-4b7f-b158-ea16ec2b5416-kube-api-access-r9n6n" (OuterVolumeSpecName: "kube-api-access-r9n6n") pod "d13e59b2-0b15-4b7f-b158-ea16ec2b5416" (UID: "d13e59b2-0b15-4b7f-b158-ea16ec2b5416"). 
InnerVolumeSpecName "kube-api-access-r9n6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.069216 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-n9fr6" event={"ID":"ea6c4698-f001-402f-91e3-1e80bc7bf443","Type":"ContainerDied","Data":"97aa039de70a06170f71988b76c9396909f3b7178da4b75eb9a0fd7d820bb21d"} Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.069259 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97aa039de70a06170f71988b76c9396909f3b7178da4b75eb9a0fd7d820bb21d" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.069342 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-n9fr6" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.077335 4886 generic.go:334] "Generic (PLEG): container finished" podID="6af00928-6484-4071-b739-bc211ac220ef" containerID="e03fdcc391c686ad6f7c447bf2012b345cc1a12adaddfc3b0b7fbabe7adbed61" exitCode=0 Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.077397 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cc0e-account-create-update-nxk7k" event={"ID":"6af00928-6484-4071-b739-bc211ac220ef","Type":"ContainerDied","Data":"e03fdcc391c686ad6f7c447bf2012b345cc1a12adaddfc3b0b7fbabe7adbed61"} Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.093817 4886 generic.go:334] "Generic (PLEG): container finished" podID="16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf" containerID="819d3c493df902007da456da0899d275e457a2f0ed2e48aedaf84f652820cb61" exitCode=0 Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.093885 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf","Type":"ContainerDied","Data":"819d3c493df902007da456da0899d275e457a2f0ed2e48aedaf84f652820cb61"} Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.098805 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9n6n\" (UniqueName: \"kubernetes.io/projected/d13e59b2-0b15-4b7f-b158-ea16ec2b5416-kube-api-access-r9n6n\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.098833 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d13e59b2-0b15-4b7f-b158-ea16ec2b5416-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.098843 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkt66\" (UniqueName: \"kubernetes.io/projected/0abefc39-4eb0-4600-8e11-b5d4af3c11b4-kube-api-access-pkt66\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.102962 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7854df7c4b-dn4j7" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.112658 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0ea79fe-a2e5-4861-be91-aba220b1b221","Type":"ContainerDied","Data":"928834e62ea2e840bea0af8f378a7be863b8582e831ecb530090b696cd7380b1"} Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.112726 4886 scope.go:117] "RemoveContainer" containerID="97f8f5e0387fde773bf154bf18b428f934c3b6dd32a6b73bb76a513b5a291c63" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.112891 4886 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.135848 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4e9f-account-create-update-sdhth" event={"ID":"d13e59b2-0b15-4b7f-b158-ea16ec2b5416","Type":"ContainerDied","Data":"e4805d6955b6d3e0ebc12d0484bdd410741675cd4a31046222f6b6bd45082c68"} Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.135876 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4805d6955b6d3e0ebc12d0484bdd410741675cd4a31046222f6b6bd45082c68" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.139678 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4e9f-account-create-update-sdhth" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.151697 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6jmdx" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.152452 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6jmdx" event={"ID":"0abefc39-4eb0-4600-8e11-b5d4af3c11b4","Type":"ContainerDied","Data":"1d6d2eb795c39ee31f6bd0a881882b56df9889d142ea82ed82c62281b1f67996"} Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.152491 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d6d2eb795c39ee31f6bd0a881882b56df9889d142ea82ed82c62281b1f67996" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.216105 4886 scope.go:117] "RemoveContainer" containerID="d07a1d9b916e4f3e7a8a1402794315d10d0fa212b37288654a33188aff743885" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.235402 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.252093 4886 scope.go:117] "RemoveContainer" containerID="463c890cb672987e4db62f57b14305282dced80284ec2842a2e3a25befe23bf9" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.291080 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.296098 4886 scope.go:117] "RemoveContainer" containerID="5d0ddc2798e73cd33929ee945c72ef848dc6759a75fd9fcc95c2f939f265b877" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.318218 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:08:24 crc kubenswrapper[4886]: E0129 17:08:24.320194 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0abefc39-4eb0-4600-8e11-b5d4af3c11b4" containerName="mariadb-database-create" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.320424 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="0abefc39-4eb0-4600-8e11-b5d4af3c11b4" containerName="mariadb-database-create" Jan 29 17:08:24 crc kubenswrapper[4886]: E0129 17:08:24.320487 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d13e59b2-0b15-4b7f-b158-ea16ec2b5416" containerName="mariadb-account-create-update" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.320504 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d13e59b2-0b15-4b7f-b158-ea16ec2b5416" containerName="mariadb-account-create-update" Jan 29 17:08:24 crc kubenswrapper[4886]: E0129 17:08:24.320593 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea6c4698-f001-402f-91e3-1e80bc7bf443" 
containerName="mariadb-database-create" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.320606 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea6c4698-f001-402f-91e3-1e80bc7bf443" containerName="mariadb-database-create" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.321338 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="0abefc39-4eb0-4600-8e11-b5d4af3c11b4" containerName="mariadb-database-create" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.321369 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d13e59b2-0b15-4b7f-b158-ea16ec2b5416" containerName="mariadb-account-create-update" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.321384 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea6c4698-f001-402f-91e3-1e80bc7bf443" containerName="mariadb-database-create" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.325709 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.331586 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.331780 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.350182 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.516531 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/291e8ff3-6792-4900-86a1-df3730548041-run-httpd\") pod \"ceilometer-0\" (UID: \"291e8ff3-6792-4900-86a1-df3730548041\") " pod="openstack/ceilometer-0" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.516668 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/291e8ff3-6792-4900-86a1-df3730548041-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"291e8ff3-6792-4900-86a1-df3730548041\") " pod="openstack/ceilometer-0" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.516719 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/291e8ff3-6792-4900-86a1-df3730548041-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"291e8ff3-6792-4900-86a1-df3730548041\") " pod="openstack/ceilometer-0" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.516869 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/291e8ff3-6792-4900-86a1-df3730548041-log-httpd\") pod \"ceilometer-0\" (UID: \"291e8ff3-6792-4900-86a1-df3730548041\") " pod="openstack/ceilometer-0" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.516927 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmdsd\" (UniqueName: \"kubernetes.io/projected/291e8ff3-6792-4900-86a1-df3730548041-kube-api-access-hmdsd\") pod \"ceilometer-0\" (UID: \"291e8ff3-6792-4900-86a1-df3730548041\") " pod="openstack/ceilometer-0" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.516997 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/291e8ff3-6792-4900-86a1-df3730548041-config-data\") pod \"ceilometer-0\" (UID: \"291e8ff3-6792-4900-86a1-df3730548041\") " pod="openstack/ceilometer-0" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.517120 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/291e8ff3-6792-4900-86a1-df3730548041-scripts\") pod \"ceilometer-0\" (UID: \"291e8ff3-6792-4900-86a1-df3730548041\") " pod="openstack/ceilometer-0" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.624881 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/291e8ff3-6792-4900-86a1-df3730548041-log-httpd\") pod \"ceilometer-0\" (UID: \"291e8ff3-6792-4900-86a1-df3730548041\") " pod="openstack/ceilometer-0" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.624955 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmdsd\" (UniqueName: \"kubernetes.io/projected/291e8ff3-6792-4900-86a1-df3730548041-kube-api-access-hmdsd\") pod \"ceilometer-0\" (UID: \"291e8ff3-6792-4900-86a1-df3730548041\") " pod="openstack/ceilometer-0" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.625005 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/291e8ff3-6792-4900-86a1-df3730548041-config-data\") pod \"ceilometer-0\" (UID: \"291e8ff3-6792-4900-86a1-df3730548041\") " pod="openstack/ceilometer-0" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.625133 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/291e8ff3-6792-4900-86a1-df3730548041-scripts\") pod \"ceilometer-0\" (UID: \"291e8ff3-6792-4900-86a1-df3730548041\") " pod="openstack/ceilometer-0" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.625193 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/291e8ff3-6792-4900-86a1-df3730548041-run-httpd\") pod \"ceilometer-0\" (UID: \"291e8ff3-6792-4900-86a1-df3730548041\") " pod="openstack/ceilometer-0" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.625269 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/291e8ff3-6792-4900-86a1-df3730548041-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"291e8ff3-6792-4900-86a1-df3730548041\") " pod="openstack/ceilometer-0" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.625965 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/291e8ff3-6792-4900-86a1-df3730548041-log-httpd\") pod \"ceilometer-0\" (UID: \"291e8ff3-6792-4900-86a1-df3730548041\") " pod="openstack/ceilometer-0" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.626941 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/291e8ff3-6792-4900-86a1-df3730548041-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"291e8ff3-6792-4900-86a1-df3730548041\") " pod="openstack/ceilometer-0" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.636048 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/291e8ff3-6792-4900-86a1-df3730548041-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"291e8ff3-6792-4900-86a1-df3730548041\") " pod="openstack/ceilometer-0" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.636290 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/291e8ff3-6792-4900-86a1-df3730548041-run-httpd\") pod \"ceilometer-0\" (UID: \"291e8ff3-6792-4900-86a1-df3730548041\") " pod="openstack/ceilometer-0" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.637728 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/291e8ff3-6792-4900-86a1-df3730548041-config-data\") pod \"ceilometer-0\" (UID: \"291e8ff3-6792-4900-86a1-df3730548041\") " pod="openstack/ceilometer-0" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.646642 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/291e8ff3-6792-4900-86a1-df3730548041-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"291e8ff3-6792-4900-86a1-df3730548041\") " pod="openstack/ceilometer-0" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.647219 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/291e8ff3-6792-4900-86a1-df3730548041-scripts\") pod \"ceilometer-0\" (UID: \"291e8ff3-6792-4900-86a1-df3730548041\") " pod="openstack/ceilometer-0" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.651608 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmdsd\" (UniqueName: \"kubernetes.io/projected/291e8ff3-6792-4900-86a1-df3730548041-kube-api-access-hmdsd\") pod \"ceilometer-0\" (UID: \"291e8ff3-6792-4900-86a1-df3730548041\") " pod="openstack/ceilometer-0" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.671565 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.672912 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="849de0d3-3456-44c2-bef4-3a435e4a432a" path="/var/lib/kubelet/pods/849de0d3-3456-44c2-bef4-3a435e4a432a/volumes" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.676221 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0ea79fe-a2e5-4861-be91-aba220b1b221" path="/var/lib/kubelet/pods/e0ea79fe-a2e5-4861-be91-aba220b1b221/volumes" Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.861023 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 17:08:24 crc kubenswrapper[4886]: I0129 17:08:24.865383 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.045276 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-scripts\") pod \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.045556 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-internal-tls-certs\") pod \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.045599 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhpzr\" (UniqueName: \"kubernetes.io/projected/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-kube-api-access-fhpzr\") pod \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.045724 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-httpd-run\") pod \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.045775 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-config-data\") pod \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.045877 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-combined-ca-bundle\") pod \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.045930 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-logs\") pod \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.058699 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-logs" (OuterVolumeSpecName: "logs") pod "16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf" (UID: "16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.058841 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\") pod \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\" (UID: \"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf\") " Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.064807 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-logs\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.063393 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf" (UID: "16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.143774 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-scripts" (OuterVolumeSpecName: "scripts") pod "16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf" (UID: "16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.143883 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-kube-api-access-fhpzr" (OuterVolumeSpecName: "kube-api-access-fhpzr") pod "16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf" (UID: "16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf"). InnerVolumeSpecName "kube-api-access-fhpzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.143948 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019" (OuterVolumeSpecName: "glance") pod "16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf" (UID: "16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf"). InnerVolumeSpecName "pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.146934 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf" (UID: "16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.171997 4886 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\") on node \"crc\" " Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.172059 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.172086 4886 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.172098 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhpzr\" (UniqueName: \"kubernetes.io/projected/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-kube-api-access-fhpzr\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.172107 4886 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.191067 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vqrmb" event={"ID":"d0772ac7-3374-4607-a644-f4ac2e1c078a","Type":"ContainerDied","Data":"56926e28702f7f49449b25045bd4430aca71c4abfb7465c1932db4f3abec35bc"} Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.191116 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56926e28702f7f49449b25045bd4430aca71c4abfb7465c1932db4f3abec35bc" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.208008 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf" (UID: "16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.211079 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f9c8-account-create-update-hcc42" event={"ID":"8258df8a-fd9a-4546-8ea7-ce4b7f7180bb","Type":"ContainerDied","Data":"e1eabc32a80d150906ee8042c9b91dd9d3a691eb3e8f2321170f2610258d0695"} Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.211116 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1eabc32a80d150906ee8042c9b91dd9d3a691eb3e8f2321170f2610258d0695" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.219250 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2dbf03ea-9df9-4f03-aee9-113dabed1c7a","Type":"ContainerStarted","Data":"7ae008cfe708205b4ec455c74e5866300c590e18ba606d283c32108e0e208c62"} Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.227115 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-config-data" (OuterVolumeSpecName: "config-data") pod "16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf" (UID: "16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.240768 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vflxs" event={"ID":"18c5f721-30d1-48de-97e4-52399587c9d1","Type":"ContainerStarted","Data":"62df5b8b647bd7eae2ddeb32c6165e5fc8cdbdb8c984d6b948088525b813e903"} Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.279055 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.279094 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.279234 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f9c8-account-create-update-hcc42" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.279621 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf","Type":"ContainerDied","Data":"71bc8d6cf1178c38541a40863263406b012b61b297b4f5183d44e11e56405a8a"} Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.279728 4886 scope.go:117] "RemoveContainer" containerID="819d3c493df902007da456da0899d275e457a2f0ed2e48aedaf84f652820cb61" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.279914 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.282260 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vqrmb" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.282555 4886 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.282667 4886 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019") on node "crc" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.314215 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vflxs" podStartSLOduration=8.333269489 podStartE2EDuration="13.31419847s" podCreationTimestamp="2026-01-29 17:08:12 +0000 UTC" firstStartedPulling="2026-01-29 17:08:18.599067094 +0000 UTC m=+2781.507786356" lastFinishedPulling="2026-01-29 17:08:23.579996065 +0000 UTC m=+2786.488715337" observedRunningTime="2026-01-29 17:08:25.275440358 +0000 UTC m=+2788.184159640" watchObservedRunningTime="2026-01-29 17:08:25.31419847 +0000 UTC m=+2788.222917742" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.380061 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8258df8a-fd9a-4546-8ea7-ce4b7f7180bb-operator-scripts\") pod \"8258df8a-fd9a-4546-8ea7-ce4b7f7180bb\" (UID: \"8258df8a-fd9a-4546-8ea7-ce4b7f7180bb\") " Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.380505 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtmbn\" (UniqueName: \"kubernetes.io/projected/d0772ac7-3374-4607-a644-f4ac2e1c078a-kube-api-access-jtmbn\") pod \"d0772ac7-3374-4607-a644-f4ac2e1c078a\" (UID: \"d0772ac7-3374-4607-a644-f4ac2e1c078a\") " Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.380535 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0772ac7-3374-4607-a644-f4ac2e1c078a-operator-scripts\") pod \"d0772ac7-3374-4607-a644-f4ac2e1c078a\" (UID: \"d0772ac7-3374-4607-a644-f4ac2e1c078a\") " Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.380580 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlwld\" (UniqueName: \"kubernetes.io/projected/8258df8a-fd9a-4546-8ea7-ce4b7f7180bb-kube-api-access-tlwld\") pod \"8258df8a-fd9a-4546-8ea7-ce4b7f7180bb\" (UID: \"8258df8a-fd9a-4546-8ea7-ce4b7f7180bb\") " Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.380658 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8258df8a-fd9a-4546-8ea7-ce4b7f7180bb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8258df8a-fd9a-4546-8ea7-ce4b7f7180bb" (UID: "8258df8a-fd9a-4546-8ea7-ce4b7f7180bb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.385596 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0772ac7-3374-4607-a644-f4ac2e1c078a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d0772ac7-3374-4607-a644-f4ac2e1c078a" (UID: "d0772ac7-3374-4607-a644-f4ac2e1c078a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.391544 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0772ac7-3374-4607-a644-f4ac2e1c078a-kube-api-access-jtmbn" (OuterVolumeSpecName: "kube-api-access-jtmbn") pod "d0772ac7-3374-4607-a644-f4ac2e1c078a" (UID: "d0772ac7-3374-4607-a644-f4ac2e1c078a"). InnerVolumeSpecName "kube-api-access-jtmbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.391686 4886 scope.go:117] "RemoveContainer" containerID="d46a9e5456f252ab3dd8ef0ca224f83e7f91449851fd433a23e9070eb20e028e" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.393320 4886 reconciler_common.go:293] "Volume detached for volume \"pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.393385 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8258df8a-fd9a-4546-8ea7-ce4b7f7180bb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.393398 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtmbn\" (UniqueName: \"kubernetes.io/projected/d0772ac7-3374-4607-a644-f4ac2e1c078a-kube-api-access-jtmbn\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.393411 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0772ac7-3374-4607-a644-f4ac2e1c078a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.403617 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8258df8a-fd9a-4546-8ea7-ce4b7f7180bb-kube-api-access-tlwld" (OuterVolumeSpecName: "kube-api-access-tlwld") pod "8258df8a-fd9a-4546-8ea7-ce4b7f7180bb" (UID: "8258df8a-fd9a-4546-8ea7-ce4b7f7180bb"). InnerVolumeSpecName "kube-api-access-tlwld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.441911 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.453825 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.465376 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 17:08:25 crc kubenswrapper[4886]: E0129 17:08:25.466213 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf" containerName="glance-httpd" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.466228 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf" containerName="glance-httpd" Jan 29 17:08:25 crc kubenswrapper[4886]: E0129 17:08:25.466239 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0772ac7-3374-4607-a644-f4ac2e1c078a" containerName="mariadb-database-create" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.466246 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0772ac7-3374-4607-a644-f4ac2e1c078a" containerName="mariadb-database-create" Jan 29 17:08:25 crc kubenswrapper[4886]: E0129 17:08:25.466298 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf" containerName="glance-log" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.466306 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf" containerName="glance-log" Jan 29 17:08:25 crc kubenswrapper[4886]: E0129 17:08:25.466316 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8258df8a-fd9a-4546-8ea7-ce4b7f7180bb" containerName="mariadb-account-create-update" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.466353 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="8258df8a-fd9a-4546-8ea7-ce4b7f7180bb" containerName="mariadb-account-create-update" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.466648 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf" containerName="glance-log" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.466699 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="8258df8a-fd9a-4546-8ea7-ce4b7f7180bb" containerName="mariadb-account-create-update" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.466719 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf" containerName="glance-httpd" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.466767 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0772ac7-3374-4607-a644-f4ac2e1c078a" containerName="mariadb-database-create" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.468423 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.473511 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.473745 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.496181 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlwld\" (UniqueName: \"kubernetes.io/projected/8258df8a-fd9a-4546-8ea7-ce4b7f7180bb-kube-api-access-tlwld\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.503732 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.562839 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.608532 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\") pod \"glance-default-internal-api-0\" (UID: \"81437be4-b399-40e9-9c33-e71319326af8\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.609153 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81437be4-b399-40e9-9c33-e71319326af8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"81437be4-b399-40e9-9c33-e71319326af8\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.609644 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81437be4-b399-40e9-9c33-e71319326af8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"81437be4-b399-40e9-9c33-e71319326af8\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.610018 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/81437be4-b399-40e9-9c33-e71319326af8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"81437be4-b399-40e9-9c33-e71319326af8\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.610069 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81437be4-b399-40e9-9c33-e71319326af8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"81437be4-b399-40e9-9c33-e71319326af8\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.610148 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81437be4-b399-40e9-9c33-e71319326af8-logs\") pod \"glance-default-internal-api-0\" (UID: \"81437be4-b399-40e9-9c33-e71319326af8\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.610275 
4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m4n7\" (UniqueName: \"kubernetes.io/projected/81437be4-b399-40e9-9c33-e71319326af8-kube-api-access-2m4n7\") pod \"glance-default-internal-api-0\" (UID: \"81437be4-b399-40e9-9c33-e71319326af8\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.614600 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/81437be4-b399-40e9-9c33-e71319326af8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"81437be4-b399-40e9-9c33-e71319326af8\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.717622 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/81437be4-b399-40e9-9c33-e71319326af8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"81437be4-b399-40e9-9c33-e71319326af8\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.717684 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\") pod \"glance-default-internal-api-0\" (UID: \"81437be4-b399-40e9-9c33-e71319326af8\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.717902 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81437be4-b399-40e9-9c33-e71319326af8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"81437be4-b399-40e9-9c33-e71319326af8\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.717946 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81437be4-b399-40e9-9c33-e71319326af8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"81437be4-b399-40e9-9c33-e71319326af8\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.717998 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/81437be4-b399-40e9-9c33-e71319326af8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"81437be4-b399-40e9-9c33-e71319326af8\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.718023 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81437be4-b399-40e9-9c33-e71319326af8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"81437be4-b399-40e9-9c33-e71319326af8\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.718067 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81437be4-b399-40e9-9c33-e71319326af8-logs\") pod \"glance-default-internal-api-0\" (UID: \"81437be4-b399-40e9-9c33-e71319326af8\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 
17:08:25.718114 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m4n7\" (UniqueName: \"kubernetes.io/projected/81437be4-b399-40e9-9c33-e71319326af8-kube-api-access-2m4n7\") pod \"glance-default-internal-api-0\" (UID: \"81437be4-b399-40e9-9c33-e71319326af8\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.721217 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/81437be4-b399-40e9-9c33-e71319326af8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"81437be4-b399-40e9-9c33-e71319326af8\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.733238 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81437be4-b399-40e9-9c33-e71319326af8-logs\") pod \"glance-default-internal-api-0\" (UID: \"81437be4-b399-40e9-9c33-e71319326af8\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.736434 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.736462 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\") pod \"glance-default-internal-api-0\" (UID: \"81437be4-b399-40e9-9c33-e71319326af8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a7b71ee9dc20b2cd8e0489051d74fcf4864cc02a892819f8a5785e080087446e/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.747626 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81437be4-b399-40e9-9c33-e71319326af8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"81437be4-b399-40e9-9c33-e71319326af8\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.749028 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/81437be4-b399-40e9-9c33-e71319326af8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"81437be4-b399-40e9-9c33-e71319326af8\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.751623 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81437be4-b399-40e9-9c33-e71319326af8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"81437be4-b399-40e9-9c33-e71319326af8\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.756054 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81437be4-b399-40e9-9c33-e71319326af8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"81437be4-b399-40e9-9c33-e71319326af8\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:08:25 crc kubenswrapper[4886]: I0129 17:08:25.789046 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2m4n7\" (UniqueName: \"kubernetes.io/projected/81437be4-b399-40e9-9c33-e71319326af8-kube-api-access-2m4n7\") pod \"glance-default-internal-api-0\" (UID: \"81437be4-b399-40e9-9c33-e71319326af8\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:08:26 crc kubenswrapper[4886]: I0129 17:08:26.033571 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3be0811-27ce-4f01-a1ee-e88ee60ba019\") pod \"glance-default-internal-api-0\" (UID: \"81437be4-b399-40e9-9c33-e71319326af8\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:08:26 crc kubenswrapper[4886]: I0129 17:08:26.110113 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 17:08:26 crc kubenswrapper[4886]: I0129 17:08:26.196081 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cc0e-account-create-update-nxk7k" Jan 29 17:08:26 crc kubenswrapper[4886]: I0129 17:08:26.338193 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pk62r\" (UniqueName: \"kubernetes.io/projected/6af00928-6484-4071-b739-bc211ac220ef-kube-api-access-pk62r\") pod \"6af00928-6484-4071-b739-bc211ac220ef\" (UID: \"6af00928-6484-4071-b739-bc211ac220ef\") " Jan 29 17:08:26 crc kubenswrapper[4886]: I0129 17:08:26.338900 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6af00928-6484-4071-b739-bc211ac220ef-operator-scripts\") pod \"6af00928-6484-4071-b739-bc211ac220ef\" (UID: \"6af00928-6484-4071-b739-bc211ac220ef\") " Jan 29 17:08:26 crc kubenswrapper[4886]: I0129 17:08:26.340058 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6af00928-6484-4071-b739-bc211ac220ef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6af00928-6484-4071-b739-bc211ac220ef" (UID: "6af00928-6484-4071-b739-bc211ac220ef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:08:26 crc kubenswrapper[4886]: I0129 17:08:26.358566 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6af00928-6484-4071-b739-bc211ac220ef-kube-api-access-pk62r" (OuterVolumeSpecName: "kube-api-access-pk62r") pod "6af00928-6484-4071-b739-bc211ac220ef" (UID: "6af00928-6484-4071-b739-bc211ac220ef"). InnerVolumeSpecName "kube-api-access-pk62r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:08:26 crc kubenswrapper[4886]: I0129 17:08:26.379562 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"291e8ff3-6792-4900-86a1-df3730548041","Type":"ContainerStarted","Data":"e9683c7a0a1e9a4a4afcaf55416c4d002525f6149a721a2eb46199347f8c0103"} Jan 29 17:08:26 crc kubenswrapper[4886]: I0129 17:08:26.380515 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-846d49f49c-kc98b" Jan 29 17:08:26 crc kubenswrapper[4886]: I0129 17:08:26.396258 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cc0e-account-create-update-nxk7k" event={"ID":"6af00928-6484-4071-b739-bc211ac220ef","Type":"ContainerDied","Data":"91c7222c3b9f7d5be92754c25f343aeff5c1732b0217924a2ad1edc9eaf57e78"} Jan 29 17:08:26 crc kubenswrapper[4886]: I0129 17:08:26.396296 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91c7222c3b9f7d5be92754c25f343aeff5c1732b0217924a2ad1edc9eaf57e78" Jan 29 17:08:26 crc kubenswrapper[4886]: I0129 17:08:26.396495 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cc0e-account-create-update-nxk7k" Jan 29 17:08:26 crc kubenswrapper[4886]: I0129 17:08:26.405152 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f9c8-account-create-update-hcc42" Jan 29 17:08:26 crc kubenswrapper[4886]: I0129 17:08:26.406090 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vqrmb" Jan 29 17:08:26 crc kubenswrapper[4886]: I0129 17:08:26.480730 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6af00928-6484-4071-b739-bc211ac220ef-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:26 crc kubenswrapper[4886]: I0129 17:08:26.480762 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pk62r\" (UniqueName: \"kubernetes.io/projected/6af00928-6484-4071-b739-bc211ac220ef-kube-api-access-pk62r\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:26 crc kubenswrapper[4886]: I0129 17:08:26.500754 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7854df7c4b-dn4j7"] Jan 29 17:08:26 crc kubenswrapper[4886]: I0129 17:08:26.501157 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7854df7c4b-dn4j7" podUID="0ff8b641-0d76-41ce-b6ac-7d708effebc0" containerName="neutron-api" containerID="cri-o://75e8cf0cad7d6d59d88f3f3bd6a97cab33d3691af01126d62cdae48b3d82240f" gracePeriod=30 Jan 29 17:08:26 crc kubenswrapper[4886]: I0129 17:08:26.501389 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7854df7c4b-dn4j7" podUID="0ff8b641-0d76-41ce-b6ac-7d708effebc0" containerName="neutron-httpd" containerID="cri-o://f3ee0a56aaca61cef2419de911db690ccd8876c78a545e2b8864e16aa4ff333a" gracePeriod=30 Jan 29 17:08:26 crc kubenswrapper[4886]: I0129 17:08:26.645683 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf" path="/var/lib/kubelet/pods/16c3788a-e2f7-4af4-8c2e-dc5aad6f3dbf/volumes" Jan 29 17:08:27 crc kubenswrapper[4886]: I0129 17:08:27.254696 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5f6fd667fd-4s5hk" Jan 29 17:08:27 crc 
kubenswrapper[4886]: I0129 17:08:27.281375 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 17:08:27 crc kubenswrapper[4886]: I0129 17:08:27.329165 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-54f8bbfbf-9qjxm"] Jan 29 17:08:27 crc kubenswrapper[4886]: I0129 17:08:27.329373 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-54f8bbfbf-9qjxm" podUID="92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f" containerName="heat-engine" containerID="cri-o://b974dc7a13dfe4723bbe5629a3fd12f5dbc56e7cab5fd25c13a1d891ca45ce3f" gracePeriod=60 Jan 29 17:08:27 crc kubenswrapper[4886]: I0129 17:08:27.473976 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2dbf03ea-9df9-4f03-aee9-113dabed1c7a","Type":"ContainerStarted","Data":"ef497bed49a4a288b6a5bb91a3f5de21fdb4d87b94282ea416c9156beaf4f5d8"} Jan 29 17:08:27 crc kubenswrapper[4886]: I0129 17:08:27.492230 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"81437be4-b399-40e9-9c33-e71319326af8","Type":"ContainerStarted","Data":"28b98de75177e4384713e1d50e58b8c51918e7f32830394947da1871c49de6bb"} Jan 29 17:08:27 crc kubenswrapper[4886]: I0129 17:08:27.509676 4886 generic.go:334] "Generic (PLEG): container finished" podID="0ff8b641-0d76-41ce-b6ac-7d708effebc0" containerID="f3ee0a56aaca61cef2419de911db690ccd8876c78a545e2b8864e16aa4ff333a" exitCode=0 Jan 29 17:08:27 crc kubenswrapper[4886]: I0129 17:08:27.509750 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7854df7c4b-dn4j7" event={"ID":"0ff8b641-0d76-41ce-b6ac-7d708effebc0","Type":"ContainerDied","Data":"f3ee0a56aaca61cef2419de911db690ccd8876c78a545e2b8864e16aa4ff333a"} Jan 29 17:08:27 crc kubenswrapper[4886]: I0129 17:08:27.546823 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"291e8ff3-6792-4900-86a1-df3730548041","Type":"ContainerStarted","Data":"3ddc8827ee40ed9c34df4f01749ce22387bf3f776bb544ffddfacdc88b3c01b2"} Jan 29 17:08:27 crc kubenswrapper[4886]: I0129 17:08:27.982740 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-7c65449fdf-42rxg" Jan 29 17:08:28 crc kubenswrapper[4886]: I0129 17:08:28.078783 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-64bb5bfdfc-h2mgd" Jan 29 17:08:28 crc kubenswrapper[4886]: I0129 17:08:28.086940 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-54985c87ff-g5725"] Jan 29 17:08:28 crc kubenswrapper[4886]: I0129 17:08:28.274918 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6c7bddd46c-bnlxj"] Jan 29 17:08:28 crc kubenswrapper[4886]: I0129 17:08:28.633128 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2dbf03ea-9df9-4f03-aee9-113dabed1c7a","Type":"ContainerStarted","Data":"0d92f69f91eee5fa6fac8149c03fa945659bdf95b999773a4673f1504dac0060"} Jan 29 17:08:28 crc kubenswrapper[4886]: I0129 17:08:28.673238 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"81437be4-b399-40e9-9c33-e71319326af8","Type":"ContainerStarted","Data":"df0462edbf1213821887b3f3e0e071cded45cf21e034be49f29377b0f167d78e"} Jan 29 17:08:28 crc kubenswrapper[4886]: I0129 17:08:28.673275 4886 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"291e8ff3-6792-4900-86a1-df3730548041","Type":"ContainerStarted","Data":"16b1fa849040aab8f0e2883ea043b834d6db5438318a7823960a49828f277bbc"} Jan 29 17:08:28 crc kubenswrapper[4886]: I0129 17:08:28.673286 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"291e8ff3-6792-4900-86a1-df3730548041","Type":"ContainerStarted","Data":"c99184ccff1048cbbd7bc7dc522f9a1c02ed8d7c96b828fa7d43e50b4bf7d853"} Jan 29 17:08:28 crc kubenswrapper[4886]: E0129 17:08:28.729973 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b974dc7a13dfe4723bbe5629a3fd12f5dbc56e7cab5fd25c13a1d891ca45ce3f" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 29 17:08:28 crc kubenswrapper[4886]: E0129 17:08:28.738428 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b974dc7a13dfe4723bbe5629a3fd12f5dbc56e7cab5fd25c13a1d891ca45ce3f" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 29 17:08:28 crc kubenswrapper[4886]: E0129 17:08:28.762454 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b974dc7a13dfe4723bbe5629a3fd12f5dbc56e7cab5fd25c13a1d891ca45ce3f" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 29 17:08:28 crc kubenswrapper[4886]: E0129 17:08:28.762539 4886 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-54f8bbfbf-9qjxm" podUID="92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f" containerName="heat-engine" Jan 29 17:08:28 crc kubenswrapper[4886]: I0129 17:08:28.823557 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.823533952 podStartE2EDuration="5.823533952s" podCreationTimestamp="2026-01-29 17:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:08:28.800193895 +0000 UTC m=+2791.708913177" watchObservedRunningTime="2026-01-29 17:08:28.823533952 +0000 UTC m=+2791.732253224" Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.014867 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6c7bddd46c-bnlxj" Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.118784 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b6ce536-47ec-45b9-b926-28f1fa7eb80a-config-data-custom\") pod \"7b6ce536-47ec-45b9-b926-28f1fa7eb80a\" (UID: \"7b6ce536-47ec-45b9-b926-28f1fa7eb80a\") " Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.118831 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b6ce536-47ec-45b9-b926-28f1fa7eb80a-combined-ca-bundle\") pod \"7b6ce536-47ec-45b9-b926-28f1fa7eb80a\" (UID: \"7b6ce536-47ec-45b9-b926-28f1fa7eb80a\") " Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.118944 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b6ce536-47ec-45b9-b926-28f1fa7eb80a-config-data\") pod \"7b6ce536-47ec-45b9-b926-28f1fa7eb80a\" (UID: \"7b6ce536-47ec-45b9-b926-28f1fa7eb80a\") " Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.126543 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6cjb\" (UniqueName: \"kubernetes.io/projected/7b6ce536-47ec-45b9-b926-28f1fa7eb80a-kube-api-access-p6cjb\") pod \"7b6ce536-47ec-45b9-b926-28f1fa7eb80a\" (UID: \"7b6ce536-47ec-45b9-b926-28f1fa7eb80a\") " Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.198966 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-54985c87ff-g5725" Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.216627 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b6ce536-47ec-45b9-b926-28f1fa7eb80a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7b6ce536-47ec-45b9-b926-28f1fa7eb80a" (UID: "7b6ce536-47ec-45b9-b926-28f1fa7eb80a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.216717 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b6ce536-47ec-45b9-b926-28f1fa7eb80a-kube-api-access-p6cjb" (OuterVolumeSpecName: "kube-api-access-p6cjb") pod "7b6ce536-47ec-45b9-b926-28f1fa7eb80a" (UID: "7b6ce536-47ec-45b9-b926-28f1fa7eb80a"). InnerVolumeSpecName "kube-api-access-p6cjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.231605 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b6ce536-47ec-45b9-b926-28f1fa7eb80a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b6ce536-47ec-45b9-b926-28f1fa7eb80a" (UID: "7b6ce536-47ec-45b9-b926-28f1fa7eb80a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.243391 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b6ce536-47ec-45b9-b926-28f1fa7eb80a-combined-ca-bundle\") pod \"7b6ce536-47ec-45b9-b926-28f1fa7eb80a\" (UID: \"7b6ce536-47ec-45b9-b926-28f1fa7eb80a\") " Jan 29 17:08:29 crc kubenswrapper[4886]: W0129 17:08:29.244023 4886 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/7b6ce536-47ec-45b9-b926-28f1fa7eb80a/volumes/kubernetes.io~secret/combined-ca-bundle Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.244049 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b6ce536-47ec-45b9-b926-28f1fa7eb80a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b6ce536-47ec-45b9-b926-28f1fa7eb80a" (UID: "7b6ce536-47ec-45b9-b926-28f1fa7eb80a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.244592 4886 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b6ce536-47ec-45b9-b926-28f1fa7eb80a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.244638 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b6ce536-47ec-45b9-b926-28f1fa7eb80a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.244674 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6cjb\" (UniqueName: \"kubernetes.io/projected/7b6ce536-47ec-45b9-b926-28f1fa7eb80a-kube-api-access-p6cjb\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.295220 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b6ce536-47ec-45b9-b926-28f1fa7eb80a-config-data" (OuterVolumeSpecName: "config-data") pod "7b6ce536-47ec-45b9-b926-28f1fa7eb80a" (UID: "7b6ce536-47ec-45b9-b926-28f1fa7eb80a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.350273 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb8rb\" (UniqueName: \"kubernetes.io/projected/04a4a757-71c6-46ec-9019-8d2f64be8285-kube-api-access-bb8rb\") pod \"04a4a757-71c6-46ec-9019-8d2f64be8285\" (UID: \"04a4a757-71c6-46ec-9019-8d2f64be8285\") " Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.350343 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04a4a757-71c6-46ec-9019-8d2f64be8285-config-data\") pod \"04a4a757-71c6-46ec-9019-8d2f64be8285\" (UID: \"04a4a757-71c6-46ec-9019-8d2f64be8285\") " Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.350435 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04a4a757-71c6-46ec-9019-8d2f64be8285-config-data-custom\") pod \"04a4a757-71c6-46ec-9019-8d2f64be8285\" (UID: \"04a4a757-71c6-46ec-9019-8d2f64be8285\") " Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.350470 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a4a757-71c6-46ec-9019-8d2f64be8285-combined-ca-bundle\") pod \"04a4a757-71c6-46ec-9019-8d2f64be8285\" (UID: \"04a4a757-71c6-46ec-9019-8d2f64be8285\") " Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.350853 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b6ce536-47ec-45b9-b926-28f1fa7eb80a-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.354161 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04a4a757-71c6-46ec-9019-8d2f64be8285-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "04a4a757-71c6-46ec-9019-8d2f64be8285" (UID: "04a4a757-71c6-46ec-9019-8d2f64be8285"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.357429 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04a4a757-71c6-46ec-9019-8d2f64be8285-kube-api-access-bb8rb" (OuterVolumeSpecName: "kube-api-access-bb8rb") pod "04a4a757-71c6-46ec-9019-8d2f64be8285" (UID: "04a4a757-71c6-46ec-9019-8d2f64be8285"). InnerVolumeSpecName "kube-api-access-bb8rb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.392165 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04a4a757-71c6-46ec-9019-8d2f64be8285-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04a4a757-71c6-46ec-9019-8d2f64be8285" (UID: "04a4a757-71c6-46ec-9019-8d2f64be8285"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.440401 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04a4a757-71c6-46ec-9019-8d2f64be8285-config-data" (OuterVolumeSpecName: "config-data") pod "04a4a757-71c6-46ec-9019-8d2f64be8285" (UID: "04a4a757-71c6-46ec-9019-8d2f64be8285"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.452562 4886 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04a4a757-71c6-46ec-9019-8d2f64be8285-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.452792 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a4a757-71c6-46ec-9019-8d2f64be8285-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.452851 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb8rb\" (UniqueName: \"kubernetes.io/projected/04a4a757-71c6-46ec-9019-8d2f64be8285-kube-api-access-bb8rb\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.452907 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04a4a757-71c6-46ec-9019-8d2f64be8285-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.660442 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.660733 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.686665 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"81437be4-b399-40e9-9c33-e71319326af8","Type":"ContainerStarted","Data":"6449b6c9d1c44f3a9f4fafbfeac03bbccd9b2e03eed7084df5eed46099830409"} Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.688994 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c7bddd46c-bnlxj" event={"ID":"7b6ce536-47ec-45b9-b926-28f1fa7eb80a","Type":"ContainerDied","Data":"28c29d3f5a45d8f6e82cfdb663ace90ab610bc4d1d57239fe93c946573d05d45"} Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.689032 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6c7bddd46c-bnlxj" Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.689049 4886 scope.go:117] "RemoveContainer" containerID="2eb9aac70b8d95e0c6e925aa406b960e03929e9d6915153ce56a560a835d977d" Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.706371 4886 generic.go:334] "Generic (PLEG): container finished" podID="0ff8b641-0d76-41ce-b6ac-7d708effebc0" containerID="75e8cf0cad7d6d59d88f3f3bd6a97cab33d3691af01126d62cdae48b3d82240f" exitCode=0 Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.706444 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7854df7c4b-dn4j7" event={"ID":"0ff8b641-0d76-41ce-b6ac-7d708effebc0","Type":"ContainerDied","Data":"75e8cf0cad7d6d59d88f3f3bd6a97cab33d3691af01126d62cdae48b3d82240f"} Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.716750 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-54985c87ff-g5725" Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.716800 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-54985c87ff-g5725" event={"ID":"04a4a757-71c6-46ec-9019-8d2f64be8285","Type":"ContainerDied","Data":"7f461b34367fc19b6002113f40bc4d964e2fb98d4e2fb8a58fd1680309b095e9"} Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.721438 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.721397462 podStartE2EDuration="4.721397462s" podCreationTimestamp="2026-01-29 17:08:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:08:29.70891259 +0000 UTC m=+2792.617631862" watchObservedRunningTime="2026-01-29 17:08:29.721397462 +0000 UTC m=+2792.630116724" Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.768268 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6c7bddd46c-bnlxj"] Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.780300 4886 scope.go:117] "RemoveContainer" containerID="269b4adc6e6be10392170084dc412e856cfe62aa07302ce9122a8ed94105dabe" Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.786386 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6c7bddd46c-bnlxj"] Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.937643 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-54985c87ff-g5725"] Jan 29 17:08:29 crc kubenswrapper[4886]: I0129 17:08:29.947170 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-54985c87ff-g5725"] Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.310247 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7854df7c4b-dn4j7" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.357465 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-c4q4z"] Jan 29 17:08:30 crc kubenswrapper[4886]: E0129 17:08:30.360446 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ff8b641-0d76-41ce-b6ac-7d708effebc0" containerName="neutron-httpd" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.360476 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ff8b641-0d76-41ce-b6ac-7d708effebc0" containerName="neutron-httpd" Jan 29 17:08:30 crc kubenswrapper[4886]: E0129 17:08:30.360502 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b6ce536-47ec-45b9-b926-28f1fa7eb80a" containerName="heat-api" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.360511 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b6ce536-47ec-45b9-b926-28f1fa7eb80a" containerName="heat-api" Jan 29 17:08:30 crc kubenswrapper[4886]: E0129 17:08:30.360540 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04a4a757-71c6-46ec-9019-8d2f64be8285" containerName="heat-cfnapi" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.360547 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="04a4a757-71c6-46ec-9019-8d2f64be8285" containerName="heat-cfnapi" Jan 29 17:08:30 crc kubenswrapper[4886]: E0129 17:08:30.360560 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04a4a757-71c6-46ec-9019-8d2f64be8285" containerName="heat-cfnapi" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.360565 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="04a4a757-71c6-46ec-9019-8d2f64be8285" containerName="heat-cfnapi" Jan 29 17:08:30 crc kubenswrapper[4886]: E0129 17:08:30.360576 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b6ce536-47ec-45b9-b926-28f1fa7eb80a" containerName="heat-api" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.360581 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b6ce536-47ec-45b9-b926-28f1fa7eb80a" containerName="heat-api" Jan 29 17:08:30 crc kubenswrapper[4886]: E0129 17:08:30.360594 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ff8b641-0d76-41ce-b6ac-7d708effebc0" containerName="neutron-api" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.360599 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ff8b641-0d76-41ce-b6ac-7d708effebc0" containerName="neutron-api" Jan 29 17:08:30 crc kubenswrapper[4886]: E0129 17:08:30.360609 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6af00928-6484-4071-b739-bc211ac220ef" containerName="mariadb-account-create-update" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.360615 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="6af00928-6484-4071-b739-bc211ac220ef" containerName="mariadb-account-create-update" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.360930 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="04a4a757-71c6-46ec-9019-8d2f64be8285" containerName="heat-cfnapi" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.360948 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b6ce536-47ec-45b9-b926-28f1fa7eb80a" containerName="heat-api" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.360964 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ff8b641-0d76-41ce-b6ac-7d708effebc0" 
containerName="neutron-httpd" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.360975 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ff8b641-0d76-41ce-b6ac-7d708effebc0" containerName="neutron-api" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.360988 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="04a4a757-71c6-46ec-9019-8d2f64be8285" containerName="heat-cfnapi" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.361002 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="6af00928-6484-4071-b739-bc211ac220ef" containerName="mariadb-account-create-update" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.361821 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-c4q4z" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.373374 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.373682 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.373865 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-wcdz5" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.403817 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-c4q4z"] Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.490591 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0ff8b641-0d76-41ce-b6ac-7d708effebc0-httpd-config\") pod \"0ff8b641-0d76-41ce-b6ac-7d708effebc0\" (UID: \"0ff8b641-0d76-41ce-b6ac-7d708effebc0\") " Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.490789 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff8b641-0d76-41ce-b6ac-7d708effebc0-combined-ca-bundle\") pod \"0ff8b641-0d76-41ce-b6ac-7d708effebc0\" (UID: \"0ff8b641-0d76-41ce-b6ac-7d708effebc0\") " Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.491564 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ff8b641-0d76-41ce-b6ac-7d708effebc0-ovndb-tls-certs\") pod \"0ff8b641-0d76-41ce-b6ac-7d708effebc0\" (UID: \"0ff8b641-0d76-41ce-b6ac-7d708effebc0\") " Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.491735 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhjq8\" (UniqueName: \"kubernetes.io/projected/0ff8b641-0d76-41ce-b6ac-7d708effebc0-kube-api-access-nhjq8\") pod \"0ff8b641-0d76-41ce-b6ac-7d708effebc0\" (UID: \"0ff8b641-0d76-41ce-b6ac-7d708effebc0\") " Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.491828 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ff8b641-0d76-41ce-b6ac-7d708effebc0-config\") pod \"0ff8b641-0d76-41ce-b6ac-7d708effebc0\" (UID: \"0ff8b641-0d76-41ce-b6ac-7d708effebc0\") " Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.492539 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97hdc\" (UniqueName: 
\"kubernetes.io/projected/c467eb7e-a553-4fc5-b366-607a30fe18dd-kube-api-access-97hdc\") pod \"nova-cell0-conductor-db-sync-c4q4z\" (UID: \"c467eb7e-a553-4fc5-b366-607a30fe18dd\") " pod="openstack/nova-cell0-conductor-db-sync-c4q4z" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.492615 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c467eb7e-a553-4fc5-b366-607a30fe18dd-scripts\") pod \"nova-cell0-conductor-db-sync-c4q4z\" (UID: \"c467eb7e-a553-4fc5-b366-607a30fe18dd\") " pod="openstack/nova-cell0-conductor-db-sync-c4q4z" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.492670 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c467eb7e-a553-4fc5-b366-607a30fe18dd-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-c4q4z\" (UID: \"c467eb7e-a553-4fc5-b366-607a30fe18dd\") " pod="openstack/nova-cell0-conductor-db-sync-c4q4z" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.493009 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c467eb7e-a553-4fc5-b366-607a30fe18dd-config-data\") pod \"nova-cell0-conductor-db-sync-c4q4z\" (UID: \"c467eb7e-a553-4fc5-b366-607a30fe18dd\") " pod="openstack/nova-cell0-conductor-db-sync-c4q4z" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.515876 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ff8b641-0d76-41ce-b6ac-7d708effebc0-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "0ff8b641-0d76-41ce-b6ac-7d708effebc0" (UID: "0ff8b641-0d76-41ce-b6ac-7d708effebc0"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.516070 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ff8b641-0d76-41ce-b6ac-7d708effebc0-kube-api-access-nhjq8" (OuterVolumeSpecName: "kube-api-access-nhjq8") pod "0ff8b641-0d76-41ce-b6ac-7d708effebc0" (UID: "0ff8b641-0d76-41ce-b6ac-7d708effebc0"). InnerVolumeSpecName "kube-api-access-nhjq8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.595059 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c467eb7e-a553-4fc5-b366-607a30fe18dd-config-data\") pod \"nova-cell0-conductor-db-sync-c4q4z\" (UID: \"c467eb7e-a553-4fc5-b366-607a30fe18dd\") " pod="openstack/nova-cell0-conductor-db-sync-c4q4z" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.595174 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97hdc\" (UniqueName: \"kubernetes.io/projected/c467eb7e-a553-4fc5-b366-607a30fe18dd-kube-api-access-97hdc\") pod \"nova-cell0-conductor-db-sync-c4q4z\" (UID: \"c467eb7e-a553-4fc5-b366-607a30fe18dd\") " pod="openstack/nova-cell0-conductor-db-sync-c4q4z" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.595221 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c467eb7e-a553-4fc5-b366-607a30fe18dd-scripts\") pod \"nova-cell0-conductor-db-sync-c4q4z\" (UID: \"c467eb7e-a553-4fc5-b366-607a30fe18dd\") " pod="openstack/nova-cell0-conductor-db-sync-c4q4z" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.595249 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c467eb7e-a553-4fc5-b366-607a30fe18dd-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-c4q4z\" (UID: \"c467eb7e-a553-4fc5-b366-607a30fe18dd\") " pod="openstack/nova-cell0-conductor-db-sync-c4q4z" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.595386 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhjq8\" (UniqueName: \"kubernetes.io/projected/0ff8b641-0d76-41ce-b6ac-7d708effebc0-kube-api-access-nhjq8\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.595399 4886 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0ff8b641-0d76-41ce-b6ac-7d708effebc0-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.614186 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c467eb7e-a553-4fc5-b366-607a30fe18dd-config-data\") pod \"nova-cell0-conductor-db-sync-c4q4z\" (UID: \"c467eb7e-a553-4fc5-b366-607a30fe18dd\") " pod="openstack/nova-cell0-conductor-db-sync-c4q4z" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.624756 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c467eb7e-a553-4fc5-b366-607a30fe18dd-scripts\") pod \"nova-cell0-conductor-db-sync-c4q4z\" (UID: \"c467eb7e-a553-4fc5-b366-607a30fe18dd\") " pod="openstack/nova-cell0-conductor-db-sync-c4q4z" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.629930 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97hdc\" (UniqueName: \"kubernetes.io/projected/c467eb7e-a553-4fc5-b366-607a30fe18dd-kube-api-access-97hdc\") pod \"nova-cell0-conductor-db-sync-c4q4z\" (UID: \"c467eb7e-a553-4fc5-b366-607a30fe18dd\") " pod="openstack/nova-cell0-conductor-db-sync-c4q4z" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.634297 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c467eb7e-a553-4fc5-b366-607a30fe18dd-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-c4q4z\" (UID: \"c467eb7e-a553-4fc5-b366-607a30fe18dd\") " pod="openstack/nova-cell0-conductor-db-sync-c4q4z" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.643371 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04a4a757-71c6-46ec-9019-8d2f64be8285" path="/var/lib/kubelet/pods/04a4a757-71c6-46ec-9019-8d2f64be8285/volumes" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.644346 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b6ce536-47ec-45b9-b926-28f1fa7eb80a" path="/var/lib/kubelet/pods/7b6ce536-47ec-45b9-b926-28f1fa7eb80a/volumes" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.655789 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ff8b641-0d76-41ce-b6ac-7d708effebc0-config" (OuterVolumeSpecName: "config") pod "0ff8b641-0d76-41ce-b6ac-7d708effebc0" (UID: "0ff8b641-0d76-41ce-b6ac-7d708effebc0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.662450 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ff8b641-0d76-41ce-b6ac-7d708effebc0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ff8b641-0d76-41ce-b6ac-7d708effebc0" (UID: "0ff8b641-0d76-41ce-b6ac-7d708effebc0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.691871 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-c4q4z" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.697701 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ff8b641-0d76-41ce-b6ac-7d708effebc0-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.697729 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff8b641-0d76-41ce-b6ac-7d708effebc0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.731241 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7854df7c4b-dn4j7" event={"ID":"0ff8b641-0d76-41ce-b6ac-7d708effebc0","Type":"ContainerDied","Data":"e7a3e9e15910d73e70e0b6e954b7743de9f55b25dd0f0bfd34c348eb738633d2"} Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.731538 4886 scope.go:117] "RemoveContainer" containerID="f3ee0a56aaca61cef2419de911db690ccd8876c78a545e2b8864e16aa4ff333a" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.731660 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7854df7c4b-dn4j7" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.756134 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"291e8ff3-6792-4900-86a1-df3730548041","Type":"ContainerStarted","Data":"75f2a13548205c6b54e7c335c35141a38cfd5ad2dd6734bb6fdb9670a340bd2a"} Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.757763 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.760442 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ff8b641-0d76-41ce-b6ac-7d708effebc0-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "0ff8b641-0d76-41ce-b6ac-7d708effebc0" (UID: "0ff8b641-0d76-41ce-b6ac-7d708effebc0"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.793304 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.213679715 podStartE2EDuration="6.793286942s" podCreationTimestamp="2026-01-29 17:08:24 +0000 UTC" firstStartedPulling="2026-01-29 17:08:25.589958567 +0000 UTC m=+2788.498677839" lastFinishedPulling="2026-01-29 17:08:30.169565794 +0000 UTC m=+2793.078285066" observedRunningTime="2026-01-29 17:08:30.787705635 +0000 UTC m=+2793.696424907" watchObservedRunningTime="2026-01-29 17:08:30.793286942 +0000 UTC m=+2793.702006214" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.800174 4886 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ff8b641-0d76-41ce-b6ac-7d708effebc0-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:30 crc kubenswrapper[4886]: I0129 17:08:30.950580 4886 scope.go:117] "RemoveContainer" containerID="75e8cf0cad7d6d59d88f3f3bd6a97cab33d3691af01126d62cdae48b3d82240f" Jan 29 17:08:31 crc kubenswrapper[4886]: I0129 17:08:31.087045 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7854df7c4b-dn4j7"] Jan 29 17:08:31 crc kubenswrapper[4886]: I0129 17:08:31.096548 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7854df7c4b-dn4j7"] Jan 29 17:08:31 crc kubenswrapper[4886]: I0129 17:08:31.357226 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-c4q4z"] Jan 29 17:08:31 crc kubenswrapper[4886]: W0129 17:08:31.362012 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc467eb7e_a553_4fc5_b366_607a30fe18dd.slice/crio-e030969deba149d036416125fae7ad0b0c1ce2a5efabff4aeea1c2936fb7a1ec WatchSource:0}: Error finding container e030969deba149d036416125fae7ad0b0c1ce2a5efabff4aeea1c2936fb7a1ec: Status 404 returned error can't find the container with id e030969deba149d036416125fae7ad0b0c1ce2a5efabff4aeea1c2936fb7a1ec Jan 29 17:08:31 crc kubenswrapper[4886]: I0129 17:08:31.806566 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-c4q4z" event={"ID":"c467eb7e-a553-4fc5-b366-607a30fe18dd","Type":"ContainerStarted","Data":"e030969deba149d036416125fae7ad0b0c1ce2a5efabff4aeea1c2936fb7a1ec"} Jan 29 17:08:32 crc kubenswrapper[4886]: I0129 17:08:32.540984 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-vflxs" Jan 29 17:08:32 crc kubenswrapper[4886]: I0129 17:08:32.542217 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vflxs" Jan 29 17:08:32 crc kubenswrapper[4886]: I0129 17:08:32.613961 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vflxs" Jan 29 17:08:32 crc kubenswrapper[4886]: I0129 17:08:32.629143 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ff8b641-0d76-41ce-b6ac-7d708effebc0" path="/var/lib/kubelet/pods/0ff8b641-0d76-41ce-b6ac-7d708effebc0/volumes" Jan 29 17:08:32 crc kubenswrapper[4886]: I0129 17:08:32.888708 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vflxs" Jan 29 17:08:32 crc kubenswrapper[4886]: I0129 17:08:32.957531 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vflxs"] Jan 29 17:08:33 crc kubenswrapper[4886]: I0129 17:08:33.592106 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 17:08:33 crc kubenswrapper[4886]: I0129 17:08:33.592192 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 17:08:33 crc kubenswrapper[4886]: I0129 17:08:33.682112 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 17:08:33 crc kubenswrapper[4886]: I0129 17:08:33.787369 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 17:08:33 crc kubenswrapper[4886]: I0129 17:08:33.847444 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 17:08:33 crc kubenswrapper[4886]: I0129 17:08:33.847486 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 17:08:34 crc kubenswrapper[4886]: I0129 17:08:34.860731 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vflxs" podUID="18c5f721-30d1-48de-97e4-52399587c9d1" containerName="registry-server" containerID="cri-o://62df5b8b647bd7eae2ddeb32c6165e5fc8cdbdb8c984d6b948088525b813e903" gracePeriod=2 Jan 29 17:08:35 crc kubenswrapper[4886]: I0129 17:08:35.877541 4886 generic.go:334] "Generic (PLEG): container finished" podID="18c5f721-30d1-48de-97e4-52399587c9d1" containerID="62df5b8b647bd7eae2ddeb32c6165e5fc8cdbdb8c984d6b948088525b813e903" exitCode=0 Jan 29 17:08:35 crc kubenswrapper[4886]: I0129 17:08:35.877614 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vflxs" event={"ID":"18c5f721-30d1-48de-97e4-52399587c9d1","Type":"ContainerDied","Data":"62df5b8b647bd7eae2ddeb32c6165e5fc8cdbdb8c984d6b948088525b813e903"} Jan 29 17:08:36 crc kubenswrapper[4886]: I0129 17:08:36.111339 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 17:08:36 crc kubenswrapper[4886]: I0129 17:08:36.111389 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 17:08:36 crc kubenswrapper[4886]: I0129 17:08:36.189604 4886 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 17:08:36 crc kubenswrapper[4886]: I0129 17:08:36.189692 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 17:08:36 crc kubenswrapper[4886]: I0129 17:08:36.906960 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 17:08:36 crc kubenswrapper[4886]: I0129 17:08:36.907446 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 17:08:38 crc kubenswrapper[4886]: E0129 17:08:38.711315 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b974dc7a13dfe4723bbe5629a3fd12f5dbc56e7cab5fd25c13a1d891ca45ce3f" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 29 17:08:38 crc kubenswrapper[4886]: E0129 17:08:38.717111 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b974dc7a13dfe4723bbe5629a3fd12f5dbc56e7cab5fd25c13a1d891ca45ce3f" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 29 17:08:38 crc kubenswrapper[4886]: E0129 17:08:38.720037 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b974dc7a13dfe4723bbe5629a3fd12f5dbc56e7cab5fd25c13a1d891ca45ce3f" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 29 17:08:38 crc kubenswrapper[4886]: E0129 17:08:38.720099 4886 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-54f8bbfbf-9qjxm" podUID="92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f" containerName="heat-engine" Jan 29 17:08:38 crc kubenswrapper[4886]: I0129 17:08:38.924787 4886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 17:08:38 crc kubenswrapper[4886]: I0129 17:08:38.924829 4886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 17:08:38 crc kubenswrapper[4886]: I0129 17:08:38.987149 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:08:38 crc kubenswrapper[4886]: I0129 17:08:38.987453 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="291e8ff3-6792-4900-86a1-df3730548041" containerName="ceilometer-central-agent" containerID="cri-o://3ddc8827ee40ed9c34df4f01749ce22387bf3f776bb544ffddfacdc88b3c01b2" gracePeriod=30 Jan 29 17:08:38 crc kubenswrapper[4886]: I0129 17:08:38.987568 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="291e8ff3-6792-4900-86a1-df3730548041" containerName="proxy-httpd" containerID="cri-o://75f2a13548205c6b54e7c335c35141a38cfd5ad2dd6734bb6fdb9670a340bd2a" gracePeriod=30 Jan 29 17:08:38 crc kubenswrapper[4886]: I0129 17:08:38.987692 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="291e8ff3-6792-4900-86a1-df3730548041" 
containerName="ceilometer-notification-agent" containerID="cri-o://c99184ccff1048cbbd7bc7dc522f9a1c02ed8d7c96b828fa7d43e50b4bf7d853" gracePeriod=30 Jan 29 17:08:38 crc kubenswrapper[4886]: I0129 17:08:38.987687 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="291e8ff3-6792-4900-86a1-df3730548041" containerName="sg-core" containerID="cri-o://16b1fa849040aab8f0e2883ea043b834d6db5438318a7823960a49828f277bbc" gracePeriod=30 Jan 29 17:08:39 crc kubenswrapper[4886]: I0129 17:08:39.945195 4886 generic.go:334] "Generic (PLEG): container finished" podID="291e8ff3-6792-4900-86a1-df3730548041" containerID="75f2a13548205c6b54e7c335c35141a38cfd5ad2dd6734bb6fdb9670a340bd2a" exitCode=0 Jan 29 17:08:39 crc kubenswrapper[4886]: I0129 17:08:39.945238 4886 generic.go:334] "Generic (PLEG): container finished" podID="291e8ff3-6792-4900-86a1-df3730548041" containerID="16b1fa849040aab8f0e2883ea043b834d6db5438318a7823960a49828f277bbc" exitCode=2 Jan 29 17:08:39 crc kubenswrapper[4886]: I0129 17:08:39.945251 4886 generic.go:334] "Generic (PLEG): container finished" podID="291e8ff3-6792-4900-86a1-df3730548041" containerID="3ddc8827ee40ed9c34df4f01749ce22387bf3f776bb544ffddfacdc88b3c01b2" exitCode=0 Jan 29 17:08:39 crc kubenswrapper[4886]: I0129 17:08:39.945276 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"291e8ff3-6792-4900-86a1-df3730548041","Type":"ContainerDied","Data":"75f2a13548205c6b54e7c335c35141a38cfd5ad2dd6734bb6fdb9670a340bd2a"} Jan 29 17:08:39 crc kubenswrapper[4886]: I0129 17:08:39.945307 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"291e8ff3-6792-4900-86a1-df3730548041","Type":"ContainerDied","Data":"16b1fa849040aab8f0e2883ea043b834d6db5438318a7823960a49828f277bbc"} Jan 29 17:08:39 crc kubenswrapper[4886]: I0129 17:08:39.945317 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"291e8ff3-6792-4900-86a1-df3730548041","Type":"ContainerDied","Data":"3ddc8827ee40ed9c34df4f01749ce22387bf3f776bb544ffddfacdc88b3c01b2"} Jan 29 17:08:39 crc kubenswrapper[4886]: I0129 17:08:39.962504 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 17:08:39 crc kubenswrapper[4886]: I0129 17:08:39.962596 4886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 17:08:39 crc kubenswrapper[4886]: I0129 17:08:39.963668 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 17:08:39 crc kubenswrapper[4886]: I0129 17:08:39.991012 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 17:08:39 crc kubenswrapper[4886]: I0129 17:08:39.991089 4886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 17:08:40 crc kubenswrapper[4886]: I0129 17:08:40.531451 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 17:08:40 crc kubenswrapper[4886]: I0129 17:08:40.961291 4886 generic.go:334] "Generic (PLEG): container finished" podID="291e8ff3-6792-4900-86a1-df3730548041" containerID="c99184ccff1048cbbd7bc7dc522f9a1c02ed8d7c96b828fa7d43e50b4bf7d853" exitCode=0 Jan 29 17:08:40 crc kubenswrapper[4886]: I0129 17:08:40.961533 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"291e8ff3-6792-4900-86a1-df3730548041","Type":"ContainerDied","Data":"c99184ccff1048cbbd7bc7dc522f9a1c02ed8d7c96b828fa7d43e50b4bf7d853"} Jan 29 17:08:42 crc kubenswrapper[4886]: E0129 17:08:42.541609 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 62df5b8b647bd7eae2ddeb32c6165e5fc8cdbdb8c984d6b948088525b813e903 is running failed: container process not found" containerID="62df5b8b647bd7eae2ddeb32c6165e5fc8cdbdb8c984d6b948088525b813e903" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 17:08:42 crc kubenswrapper[4886]: E0129 17:08:42.542368 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 62df5b8b647bd7eae2ddeb32c6165e5fc8cdbdb8c984d6b948088525b813e903 is running failed: container process not found" containerID="62df5b8b647bd7eae2ddeb32c6165e5fc8cdbdb8c984d6b948088525b813e903" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 17:08:42 crc kubenswrapper[4886]: E0129 17:08:42.542676 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 62df5b8b647bd7eae2ddeb32c6165e5fc8cdbdb8c984d6b948088525b813e903 is running failed: container process not found" containerID="62df5b8b647bd7eae2ddeb32c6165e5fc8cdbdb8c984d6b948088525b813e903" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 17:08:42 crc kubenswrapper[4886]: E0129 17:08:42.542712 4886 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 62df5b8b647bd7eae2ddeb32c6165e5fc8cdbdb8c984d6b948088525b813e903 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-vflxs" podUID="18c5f721-30d1-48de-97e4-52399587c9d1" containerName="registry-server" Jan 29 17:08:42 crc kubenswrapper[4886]: I0129 17:08:42.982828 4886 generic.go:334] "Generic (PLEG): container finished" podID="92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f" containerID="b974dc7a13dfe4723bbe5629a3fd12f5dbc56e7cab5fd25c13a1d891ca45ce3f" exitCode=0 Jan 29 17:08:42 crc kubenswrapper[4886]: I0129 17:08:42.982972 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-54f8bbfbf-9qjxm" event={"ID":"92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f","Type":"ContainerDied","Data":"b974dc7a13dfe4723bbe5629a3fd12f5dbc56e7cab5fd25c13a1d891ca45ce3f"} Jan 29 17:08:43 crc kubenswrapper[4886]: I0129 17:08:43.998970 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vflxs" event={"ID":"18c5f721-30d1-48de-97e4-52399587c9d1","Type":"ContainerDied","Data":"fe354152829de757ca5537dde1fd3cfc8eb62b13a98c62b74ae6e9f6ed2f435c"} Jan 29 17:08:43 crc kubenswrapper[4886]: I0129 17:08:43.999245 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe354152829de757ca5537dde1fd3cfc8eb62b13a98c62b74ae6e9f6ed2f435c" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.158963 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vflxs" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.239168 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18c5f721-30d1-48de-97e4-52399587c9d1-catalog-content\") pod \"18c5f721-30d1-48de-97e4-52399587c9d1\" (UID: \"18c5f721-30d1-48de-97e4-52399587c9d1\") " Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.239264 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18c5f721-30d1-48de-97e4-52399587c9d1-utilities\") pod \"18c5f721-30d1-48de-97e4-52399587c9d1\" (UID: \"18c5f721-30d1-48de-97e4-52399587c9d1\") " Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.240039 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18c5f721-30d1-48de-97e4-52399587c9d1-utilities" (OuterVolumeSpecName: "utilities") pod "18c5f721-30d1-48de-97e4-52399587c9d1" (UID: "18c5f721-30d1-48de-97e4-52399587c9d1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.240240 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tzn7\" (UniqueName: \"kubernetes.io/projected/18c5f721-30d1-48de-97e4-52399587c9d1-kube-api-access-2tzn7\") pod \"18c5f721-30d1-48de-97e4-52399587c9d1\" (UID: \"18c5f721-30d1-48de-97e4-52399587c9d1\") " Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.240959 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18c5f721-30d1-48de-97e4-52399587c9d1-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.249581 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18c5f721-30d1-48de-97e4-52399587c9d1-kube-api-access-2tzn7" (OuterVolumeSpecName: "kube-api-access-2tzn7") pod "18c5f721-30d1-48de-97e4-52399587c9d1" (UID: "18c5f721-30d1-48de-97e4-52399587c9d1"). InnerVolumeSpecName "kube-api-access-2tzn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.289733 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18c5f721-30d1-48de-97e4-52399587c9d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "18c5f721-30d1-48de-97e4-52399587c9d1" (UID: "18c5f721-30d1-48de-97e4-52399587c9d1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.344468 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18c5f721-30d1-48de-97e4-52399587c9d1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.344752 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tzn7\" (UniqueName: \"kubernetes.io/projected/18c5f721-30d1-48de-97e4-52399587c9d1-kube-api-access-2tzn7\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.617645 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-54f8bbfbf-9qjxm" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.629648 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.754074 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f-config-data\") pod \"92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f\" (UID: \"92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f\") " Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.754126 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f-config-data-custom\") pod \"92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f\" (UID: \"92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f\") " Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.754183 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/291e8ff3-6792-4900-86a1-df3730548041-run-httpd\") pod \"291e8ff3-6792-4900-86a1-df3730548041\" (UID: \"291e8ff3-6792-4900-86a1-df3730548041\") " Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.754223 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/291e8ff3-6792-4900-86a1-df3730548041-combined-ca-bundle\") pod \"291e8ff3-6792-4900-86a1-df3730548041\" (UID: \"291e8ff3-6792-4900-86a1-df3730548041\") " Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.754290 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/291e8ff3-6792-4900-86a1-df3730548041-sg-core-conf-yaml\") pod \"291e8ff3-6792-4900-86a1-df3730548041\" (UID: \"291e8ff3-6792-4900-86a1-df3730548041\") " Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.754353 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/291e8ff3-6792-4900-86a1-df3730548041-log-httpd\") pod \"291e8ff3-6792-4900-86a1-df3730548041\" (UID: \"291e8ff3-6792-4900-86a1-df3730548041\") " Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.754443 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmdsd\" (UniqueName: \"kubernetes.io/projected/291e8ff3-6792-4900-86a1-df3730548041-kube-api-access-hmdsd\") pod \"291e8ff3-6792-4900-86a1-df3730548041\" (UID: \"291e8ff3-6792-4900-86a1-df3730548041\") " Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.754476 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f-combined-ca-bundle\") pod \"92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f\" (UID: \"92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f\") " Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.754538 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/291e8ff3-6792-4900-86a1-df3730548041-scripts\") pod \"291e8ff3-6792-4900-86a1-df3730548041\" (UID: \"291e8ff3-6792-4900-86a1-df3730548041\") " Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.754552 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/291e8ff3-6792-4900-86a1-df3730548041-config-data\") pod \"291e8ff3-6792-4900-86a1-df3730548041\" (UID: \"291e8ff3-6792-4900-86a1-df3730548041\") " Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.754592 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn4rg\" (UniqueName: \"kubernetes.io/projected/92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f-kube-api-access-bn4rg\") pod \"92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f\" (UID: \"92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f\") " Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.756021 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/291e8ff3-6792-4900-86a1-df3730548041-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "291e8ff3-6792-4900-86a1-df3730548041" (UID: "291e8ff3-6792-4900-86a1-df3730548041"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.756869 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/291e8ff3-6792-4900-86a1-df3730548041-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "291e8ff3-6792-4900-86a1-df3730548041" (UID: "291e8ff3-6792-4900-86a1-df3730548041"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.758880 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291e8ff3-6792-4900-86a1-df3730548041-scripts" (OuterVolumeSpecName: "scripts") pod "291e8ff3-6792-4900-86a1-df3730548041" (UID: "291e8ff3-6792-4900-86a1-df3730548041"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.759363 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/291e8ff3-6792-4900-86a1-df3730548041-kube-api-access-hmdsd" (OuterVolumeSpecName: "kube-api-access-hmdsd") pod "291e8ff3-6792-4900-86a1-df3730548041" (UID: "291e8ff3-6792-4900-86a1-df3730548041"). InnerVolumeSpecName "kube-api-access-hmdsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.759547 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f-kube-api-access-bn4rg" (OuterVolumeSpecName: "kube-api-access-bn4rg") pod "92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f" (UID: "92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f"). InnerVolumeSpecName "kube-api-access-bn4rg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.760801 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f" (UID: "92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.789808 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f" (UID: "92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.794923 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291e8ff3-6792-4900-86a1-df3730548041-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "291e8ff3-6792-4900-86a1-df3730548041" (UID: "291e8ff3-6792-4900-86a1-df3730548041"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.828797 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f-config-data" (OuterVolumeSpecName: "config-data") pod "92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f" (UID: "92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.857775 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.857811 4886 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.857822 4886 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/291e8ff3-6792-4900-86a1-df3730548041-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.857831 4886 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/291e8ff3-6792-4900-86a1-df3730548041-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.857839 4886 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/291e8ff3-6792-4900-86a1-df3730548041-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.857848 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmdsd\" (UniqueName: \"kubernetes.io/projected/291e8ff3-6792-4900-86a1-df3730548041-kube-api-access-hmdsd\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.857858 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.857867 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/291e8ff3-6792-4900-86a1-df3730548041-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 
17:08:44.857876 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bn4rg\" (UniqueName: \"kubernetes.io/projected/92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f-kube-api-access-bn4rg\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.867097 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291e8ff3-6792-4900-86a1-df3730548041-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "291e8ff3-6792-4900-86a1-df3730548041" (UID: "291e8ff3-6792-4900-86a1-df3730548041"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.887478 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291e8ff3-6792-4900-86a1-df3730548041-config-data" (OuterVolumeSpecName: "config-data") pod "291e8ff3-6792-4900-86a1-df3730548041" (UID: "291e8ff3-6792-4900-86a1-df3730548041"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.960358 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/291e8ff3-6792-4900-86a1-df3730548041-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:44 crc kubenswrapper[4886]: I0129 17:08:44.960398 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/291e8ff3-6792-4900-86a1-df3730548041-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.015584 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"291e8ff3-6792-4900-86a1-df3730548041","Type":"ContainerDied","Data":"e9683c7a0a1e9a4a4afcaf55416c4d002525f6149a721a2eb46199347f8c0103"} Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.015642 4886 scope.go:117] "RemoveContainer" containerID="75f2a13548205c6b54e7c335c35141a38cfd5ad2dd6734bb6fdb9670a340bd2a" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.015778 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.021256 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-54f8bbfbf-9qjxm" event={"ID":"92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f","Type":"ContainerDied","Data":"0f319e6982b89bee08a0388a5eb4c63bb973328dc67504ccea174e9928171156"} Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.021376 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-54f8bbfbf-9qjxm" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.031883 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vflxs" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.032638 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-c4q4z" event={"ID":"c467eb7e-a553-4fc5-b366-607a30fe18dd","Type":"ContainerStarted","Data":"b316bbc4bed9ea6d21a1f48ac1daf91a604e958e8664a1c95a0d70b2476abcfa"} Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.068757 4886 scope.go:117] "RemoveContainer" containerID="16b1fa849040aab8f0e2883ea043b834d6db5438318a7823960a49828f277bbc" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.132204 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-c4q4z" podStartSLOduration=2.5024291720000003 podStartE2EDuration="15.132178266s" podCreationTimestamp="2026-01-29 17:08:30 +0000 UTC" firstStartedPulling="2026-01-29 17:08:31.369684627 +0000 UTC m=+2794.278403909" lastFinishedPulling="2026-01-29 17:08:43.999433731 +0000 UTC m=+2806.908153003" observedRunningTime="2026-01-29 17:08:45.066785934 +0000 UTC m=+2807.975505216" watchObservedRunningTime="2026-01-29 17:08:45.132178266 +0000 UTC m=+2808.040897538" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.156382 4886 scope.go:117] "RemoveContainer" containerID="c99184ccff1048cbbd7bc7dc522f9a1c02ed8d7c96b828fa7d43e50b4bf7d853" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.169956 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vflxs"] Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.191932 4886 scope.go:117] "RemoveContainer" containerID="3ddc8827ee40ed9c34df4f01749ce22387bf3f776bb544ffddfacdc88b3c01b2" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.196376 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vflxs"] Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.208019 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-54f8bbfbf-9qjxm"] Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.219286 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-54f8bbfbf-9qjxm"] Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.236792 4886 scope.go:117] "RemoveContainer" containerID="b974dc7a13dfe4723bbe5629a3fd12f5dbc56e7cab5fd25c13a1d891ca45ce3f" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.237441 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.254382 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.270620 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:08:45 crc kubenswrapper[4886]: E0129 17:08:45.271179 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f" containerName="heat-engine" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.271200 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f" containerName="heat-engine" Jan 29 17:08:45 crc kubenswrapper[4886]: E0129 17:08:45.271306 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="291e8ff3-6792-4900-86a1-df3730548041" containerName="proxy-httpd" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.271316 4886 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="291e8ff3-6792-4900-86a1-df3730548041" containerName="proxy-httpd" Jan 29 17:08:45 crc kubenswrapper[4886]: E0129 17:08:45.271347 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="291e8ff3-6792-4900-86a1-df3730548041" containerName="ceilometer-central-agent" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.271356 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="291e8ff3-6792-4900-86a1-df3730548041" containerName="ceilometer-central-agent" Jan 29 17:08:45 crc kubenswrapper[4886]: E0129 17:08:45.271369 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c5f721-30d1-48de-97e4-52399587c9d1" containerName="registry-server" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.271377 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c5f721-30d1-48de-97e4-52399587c9d1" containerName="registry-server" Jan 29 17:08:45 crc kubenswrapper[4886]: E0129 17:08:45.271394 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c5f721-30d1-48de-97e4-52399587c9d1" containerName="extract-utilities" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.271402 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c5f721-30d1-48de-97e4-52399587c9d1" containerName="extract-utilities" Jan 29 17:08:45 crc kubenswrapper[4886]: E0129 17:08:45.271418 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c5f721-30d1-48de-97e4-52399587c9d1" containerName="extract-content" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.271426 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c5f721-30d1-48de-97e4-52399587c9d1" containerName="extract-content" Jan 29 17:08:45 crc kubenswrapper[4886]: E0129 17:08:45.271439 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="291e8ff3-6792-4900-86a1-df3730548041" containerName="sg-core" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.271446 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="291e8ff3-6792-4900-86a1-df3730548041" containerName="sg-core" Jan 29 17:08:45 crc kubenswrapper[4886]: E0129 17:08:45.271467 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="291e8ff3-6792-4900-86a1-df3730548041" containerName="ceilometer-notification-agent" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.271475 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="291e8ff3-6792-4900-86a1-df3730548041" containerName="ceilometer-notification-agent" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.271831 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="291e8ff3-6792-4900-86a1-df3730548041" containerName="ceilometer-central-agent" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.271860 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b6ce536-47ec-45b9-b926-28f1fa7eb80a" containerName="heat-api" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.271877 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c5f721-30d1-48de-97e4-52399587c9d1" containerName="registry-server" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.271889 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="291e8ff3-6792-4900-86a1-df3730548041" containerName="ceilometer-notification-agent" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.271904 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="291e8ff3-6792-4900-86a1-df3730548041" containerName="sg-core" Jan 29 17:08:45 crc 
kubenswrapper[4886]: I0129 17:08:45.271927 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f" containerName="heat-engine" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.271948 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="291e8ff3-6792-4900-86a1-df3730548041" containerName="proxy-httpd" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.274593 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.277706 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.281542 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.313376 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.373065 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc82dcdd-793c-4083-9143-1b04037f40d3-config-data\") pod \"ceilometer-0\" (UID: \"dc82dcdd-793c-4083-9143-1b04037f40d3\") " pod="openstack/ceilometer-0" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.373129 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc82dcdd-793c-4083-9143-1b04037f40d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc82dcdd-793c-4083-9143-1b04037f40d3\") " pod="openstack/ceilometer-0" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.373275 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc82dcdd-793c-4083-9143-1b04037f40d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc82dcdd-793c-4083-9143-1b04037f40d3\") " pod="openstack/ceilometer-0" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.373306 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrz2g\" (UniqueName: \"kubernetes.io/projected/dc82dcdd-793c-4083-9143-1b04037f40d3-kube-api-access-wrz2g\") pod \"ceilometer-0\" (UID: \"dc82dcdd-793c-4083-9143-1b04037f40d3\") " pod="openstack/ceilometer-0" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.373353 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc82dcdd-793c-4083-9143-1b04037f40d3-run-httpd\") pod \"ceilometer-0\" (UID: \"dc82dcdd-793c-4083-9143-1b04037f40d3\") " pod="openstack/ceilometer-0" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.373381 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc82dcdd-793c-4083-9143-1b04037f40d3-scripts\") pod \"ceilometer-0\" (UID: \"dc82dcdd-793c-4083-9143-1b04037f40d3\") " pod="openstack/ceilometer-0" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.373426 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc82dcdd-793c-4083-9143-1b04037f40d3-log-httpd\") pod \"ceilometer-0\" 
(UID: \"dc82dcdd-793c-4083-9143-1b04037f40d3\") " pod="openstack/ceilometer-0" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.475437 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc82dcdd-793c-4083-9143-1b04037f40d3-log-httpd\") pod \"ceilometer-0\" (UID: \"dc82dcdd-793c-4083-9143-1b04037f40d3\") " pod="openstack/ceilometer-0" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.475713 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc82dcdd-793c-4083-9143-1b04037f40d3-config-data\") pod \"ceilometer-0\" (UID: \"dc82dcdd-793c-4083-9143-1b04037f40d3\") " pod="openstack/ceilometer-0" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.475784 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc82dcdd-793c-4083-9143-1b04037f40d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc82dcdd-793c-4083-9143-1b04037f40d3\") " pod="openstack/ceilometer-0" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.475909 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc82dcdd-793c-4083-9143-1b04037f40d3-log-httpd\") pod \"ceilometer-0\" (UID: \"dc82dcdd-793c-4083-9143-1b04037f40d3\") " pod="openstack/ceilometer-0" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.475956 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc82dcdd-793c-4083-9143-1b04037f40d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc82dcdd-793c-4083-9143-1b04037f40d3\") " pod="openstack/ceilometer-0" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.476010 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrz2g\" (UniqueName: \"kubernetes.io/projected/dc82dcdd-793c-4083-9143-1b04037f40d3-kube-api-access-wrz2g\") pod \"ceilometer-0\" (UID: \"dc82dcdd-793c-4083-9143-1b04037f40d3\") " pod="openstack/ceilometer-0" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.476074 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc82dcdd-793c-4083-9143-1b04037f40d3-run-httpd\") pod \"ceilometer-0\" (UID: \"dc82dcdd-793c-4083-9143-1b04037f40d3\") " pod="openstack/ceilometer-0" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.476105 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc82dcdd-793c-4083-9143-1b04037f40d3-scripts\") pod \"ceilometer-0\" (UID: \"dc82dcdd-793c-4083-9143-1b04037f40d3\") " pod="openstack/ceilometer-0" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.477150 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc82dcdd-793c-4083-9143-1b04037f40d3-run-httpd\") pod \"ceilometer-0\" (UID: \"dc82dcdd-793c-4083-9143-1b04037f40d3\") " pod="openstack/ceilometer-0" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.480593 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc82dcdd-793c-4083-9143-1b04037f40d3-scripts\") pod \"ceilometer-0\" (UID: \"dc82dcdd-793c-4083-9143-1b04037f40d3\") " 
pod="openstack/ceilometer-0" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.480666 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc82dcdd-793c-4083-9143-1b04037f40d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc82dcdd-793c-4083-9143-1b04037f40d3\") " pod="openstack/ceilometer-0" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.480678 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc82dcdd-793c-4083-9143-1b04037f40d3-config-data\") pod \"ceilometer-0\" (UID: \"dc82dcdd-793c-4083-9143-1b04037f40d3\") " pod="openstack/ceilometer-0" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.481850 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc82dcdd-793c-4083-9143-1b04037f40d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc82dcdd-793c-4083-9143-1b04037f40d3\") " pod="openstack/ceilometer-0" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.499136 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrz2g\" (UniqueName: \"kubernetes.io/projected/dc82dcdd-793c-4083-9143-1b04037f40d3-kube-api-access-wrz2g\") pod \"ceilometer-0\" (UID: \"dc82dcdd-793c-4083-9143-1b04037f40d3\") " pod="openstack/ceilometer-0" Jan 29 17:08:45 crc kubenswrapper[4886]: I0129 17:08:45.699409 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:08:46 crc kubenswrapper[4886]: I0129 17:08:46.185890 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:08:46 crc kubenswrapper[4886]: W0129 17:08:46.193667 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc82dcdd_793c_4083_9143_1b04037f40d3.slice/crio-17d5fd5f42ae0736004ca73847456c411bd6a9d8d5a5c3344ecb73c5ac5a2736 WatchSource:0}: Error finding container 17d5fd5f42ae0736004ca73847456c411bd6a9d8d5a5c3344ecb73c5ac5a2736: Status 404 returned error can't find the container with id 17d5fd5f42ae0736004ca73847456c411bd6a9d8d5a5c3344ecb73c5ac5a2736 Jan 29 17:08:46 crc kubenswrapper[4886]: I0129 17:08:46.629123 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18c5f721-30d1-48de-97e4-52399587c9d1" path="/var/lib/kubelet/pods/18c5f721-30d1-48de-97e4-52399587c9d1/volumes" Jan 29 17:08:46 crc kubenswrapper[4886]: I0129 17:08:46.630350 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="291e8ff3-6792-4900-86a1-df3730548041" path="/var/lib/kubelet/pods/291e8ff3-6792-4900-86a1-df3730548041/volumes" Jan 29 17:08:46 crc kubenswrapper[4886]: I0129 17:08:46.631520 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f" path="/var/lib/kubelet/pods/92e92176-b984-4dd5-8ea0-8bcb3dbe5e2f/volumes" Jan 29 17:08:47 crc kubenswrapper[4886]: I0129 17:08:47.062353 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc82dcdd-793c-4083-9143-1b04037f40d3","Type":"ContainerStarted","Data":"17d5fd5f42ae0736004ca73847456c411bd6a9d8d5a5c3344ecb73c5ac5a2736"} Jan 29 17:08:49 crc kubenswrapper[4886]: I0129 17:08:49.085909 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"dc82dcdd-793c-4083-9143-1b04037f40d3","Type":"ContainerStarted","Data":"027f2f6b9a90551af8155e3f9d55caa5b15fe881b17a34fbffe2e1da19cdee97"} Jan 29 17:08:49 crc kubenswrapper[4886]: I0129 17:08:49.086534 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc82dcdd-793c-4083-9143-1b04037f40d3","Type":"ContainerStarted","Data":"8637ee0b12535652fad4c6c24b400526b4e4e5a64b9711598c8207164cbe4a20"} Jan 29 17:08:50 crc kubenswrapper[4886]: I0129 17:08:50.105199 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc82dcdd-793c-4083-9143-1b04037f40d3","Type":"ContainerStarted","Data":"6549cee8bc993f3edbbbdedce8da615b537aaf75fc4fbbffe8a146e13427c8c8"} Jan 29 17:08:54 crc kubenswrapper[4886]: I0129 17:08:54.160549 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc82dcdd-793c-4083-9143-1b04037f40d3","Type":"ContainerStarted","Data":"b4bbd9c439d2c24659fb57b3faf885aaff4aa720b408e45a5289e66ac74560d4"} Jan 29 17:08:54 crc kubenswrapper[4886]: I0129 17:08:54.161241 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 17:08:54 crc kubenswrapper[4886]: I0129 17:08:54.190773 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.36065191 podStartE2EDuration="9.190754165s" podCreationTimestamp="2026-01-29 17:08:45 +0000 UTC" firstStartedPulling="2026-01-29 17:08:46.197297995 +0000 UTC m=+2809.106017267" lastFinishedPulling="2026-01-29 17:08:53.02740023 +0000 UTC m=+2815.936119522" observedRunningTime="2026-01-29 17:08:54.178023057 +0000 UTC m=+2817.086742329" watchObservedRunningTime="2026-01-29 17:08:54.190754165 +0000 UTC m=+2817.099473437" Jan 29 17:08:57 crc kubenswrapper[4886]: I0129 17:08:57.716428 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:08:57 crc kubenswrapper[4886]: I0129 17:08:57.717708 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc82dcdd-793c-4083-9143-1b04037f40d3" containerName="ceilometer-central-agent" containerID="cri-o://8637ee0b12535652fad4c6c24b400526b4e4e5a64b9711598c8207164cbe4a20" gracePeriod=30 Jan 29 17:08:57 crc kubenswrapper[4886]: I0129 17:08:57.717731 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc82dcdd-793c-4083-9143-1b04037f40d3" containerName="sg-core" containerID="cri-o://6549cee8bc993f3edbbbdedce8da615b537aaf75fc4fbbffe8a146e13427c8c8" gracePeriod=30 Jan 29 17:08:57 crc kubenswrapper[4886]: I0129 17:08:57.717854 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc82dcdd-793c-4083-9143-1b04037f40d3" containerName="ceilometer-notification-agent" containerID="cri-o://027f2f6b9a90551af8155e3f9d55caa5b15fe881b17a34fbffe2e1da19cdee97" gracePeriod=30 Jan 29 17:08:57 crc kubenswrapper[4886]: I0129 17:08:57.717867 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc82dcdd-793c-4083-9143-1b04037f40d3" containerName="proxy-httpd" containerID="cri-o://b4bbd9c439d2c24659fb57b3faf885aaff4aa720b408e45a5289e66ac74560d4" gracePeriod=30 Jan 29 17:08:58 crc kubenswrapper[4886]: I0129 17:08:58.201289 4886 generic.go:334] "Generic (PLEG): container finished" podID="dc82dcdd-793c-4083-9143-1b04037f40d3" 
containerID="b4bbd9c439d2c24659fb57b3faf885aaff4aa720b408e45a5289e66ac74560d4" exitCode=0 Jan 29 17:08:58 crc kubenswrapper[4886]: I0129 17:08:58.201671 4886 generic.go:334] "Generic (PLEG): container finished" podID="dc82dcdd-793c-4083-9143-1b04037f40d3" containerID="6549cee8bc993f3edbbbdedce8da615b537aaf75fc4fbbffe8a146e13427c8c8" exitCode=2 Jan 29 17:08:58 crc kubenswrapper[4886]: I0129 17:08:58.201360 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc82dcdd-793c-4083-9143-1b04037f40d3","Type":"ContainerDied","Data":"b4bbd9c439d2c24659fb57b3faf885aaff4aa720b408e45a5289e66ac74560d4"} Jan 29 17:08:58 crc kubenswrapper[4886]: I0129 17:08:58.201707 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc82dcdd-793c-4083-9143-1b04037f40d3","Type":"ContainerDied","Data":"6549cee8bc993f3edbbbdedce8da615b537aaf75fc4fbbffe8a146e13427c8c8"} Jan 29 17:08:59 crc kubenswrapper[4886]: I0129 17:08:59.218159 4886 generic.go:334] "Generic (PLEG): container finished" podID="dc82dcdd-793c-4083-9143-1b04037f40d3" containerID="027f2f6b9a90551af8155e3f9d55caa5b15fe881b17a34fbffe2e1da19cdee97" exitCode=0 Jan 29 17:08:59 crc kubenswrapper[4886]: I0129 17:08:59.218233 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc82dcdd-793c-4083-9143-1b04037f40d3","Type":"ContainerDied","Data":"027f2f6b9a90551af8155e3f9d55caa5b15fe881b17a34fbffe2e1da19cdee97"} Jan 29 17:08:59 crc kubenswrapper[4886]: I0129 17:08:59.220090 4886 generic.go:334] "Generic (PLEG): container finished" podID="c467eb7e-a553-4fc5-b366-607a30fe18dd" containerID="b316bbc4bed9ea6d21a1f48ac1daf91a604e958e8664a1c95a0d70b2476abcfa" exitCode=0 Jan 29 17:08:59 crc kubenswrapper[4886]: I0129 17:08:59.220131 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-c4q4z" event={"ID":"c467eb7e-a553-4fc5-b366-607a30fe18dd","Type":"ContainerDied","Data":"b316bbc4bed9ea6d21a1f48ac1daf91a604e958e8664a1c95a0d70b2476abcfa"} Jan 29 17:08:59 crc kubenswrapper[4886]: I0129 17:08:59.661296 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:08:59 crc kubenswrapper[4886]: I0129 17:08:59.661384 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:08:59 crc kubenswrapper[4886]: I0129 17:08:59.661437 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" Jan 29 17:08:59 crc kubenswrapper[4886]: I0129 17:08:59.662467 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"db3893b2fd9096a13f5744612d4a2bcbba80c7ed2ddb6ffa1307348c351b1963"} pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 17:08:59 crc kubenswrapper[4886]: I0129 17:08:59.662536 4886 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" containerID="cri-o://db3893b2fd9096a13f5744612d4a2bcbba80c7ed2ddb6ffa1307348c351b1963" gracePeriod=600 Jan 29 17:09:00 crc kubenswrapper[4886]: I0129 17:09:00.234637 4886 generic.go:334] "Generic (PLEG): container finished" podID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerID="db3893b2fd9096a13f5744612d4a2bcbba80c7ed2ddb6ffa1307348c351b1963" exitCode=0 Jan 29 17:09:00 crc kubenswrapper[4886]: I0129 17:09:00.234731 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerDied","Data":"db3893b2fd9096a13f5744612d4a2bcbba80c7ed2ddb6ffa1307348c351b1963"} Jan 29 17:09:00 crc kubenswrapper[4886]: I0129 17:09:00.235023 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerStarted","Data":"37523dcabcb104a05e3a585e6aacd7a7633efd02b8c8e5f7dd95e23d0d43f05d"} Jan 29 17:09:00 crc kubenswrapper[4886]: I0129 17:09:00.235048 4886 scope.go:117] "RemoveContainer" containerID="1ef597c576c05004c5148470ade7ddd51ab3cad8d942f918ff09afb054559dfc" Jan 29 17:09:00 crc kubenswrapper[4886]: I0129 17:09:00.675381 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-c4q4z" Jan 29 17:09:00 crc kubenswrapper[4886]: I0129 17:09:00.862761 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c467eb7e-a553-4fc5-b366-607a30fe18dd-scripts\") pod \"c467eb7e-a553-4fc5-b366-607a30fe18dd\" (UID: \"c467eb7e-a553-4fc5-b366-607a30fe18dd\") " Jan 29 17:09:00 crc kubenswrapper[4886]: I0129 17:09:00.862868 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97hdc\" (UniqueName: \"kubernetes.io/projected/c467eb7e-a553-4fc5-b366-607a30fe18dd-kube-api-access-97hdc\") pod \"c467eb7e-a553-4fc5-b366-607a30fe18dd\" (UID: \"c467eb7e-a553-4fc5-b366-607a30fe18dd\") " Jan 29 17:09:00 crc kubenswrapper[4886]: I0129 17:09:00.862955 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c467eb7e-a553-4fc5-b366-607a30fe18dd-combined-ca-bundle\") pod \"c467eb7e-a553-4fc5-b366-607a30fe18dd\" (UID: \"c467eb7e-a553-4fc5-b366-607a30fe18dd\") " Jan 29 17:09:00 crc kubenswrapper[4886]: I0129 17:09:00.863039 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c467eb7e-a553-4fc5-b366-607a30fe18dd-config-data\") pod \"c467eb7e-a553-4fc5-b366-607a30fe18dd\" (UID: \"c467eb7e-a553-4fc5-b366-607a30fe18dd\") " Jan 29 17:09:00 crc kubenswrapper[4886]: I0129 17:09:00.870945 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c467eb7e-a553-4fc5-b366-607a30fe18dd-scripts" (OuterVolumeSpecName: "scripts") pod "c467eb7e-a553-4fc5-b366-607a30fe18dd" (UID: "c467eb7e-a553-4fc5-b366-607a30fe18dd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:00 crc kubenswrapper[4886]: I0129 17:09:00.871135 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c467eb7e-a553-4fc5-b366-607a30fe18dd-kube-api-access-97hdc" (OuterVolumeSpecName: "kube-api-access-97hdc") pod "c467eb7e-a553-4fc5-b366-607a30fe18dd" (UID: "c467eb7e-a553-4fc5-b366-607a30fe18dd"). InnerVolumeSpecName "kube-api-access-97hdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:09:00 crc kubenswrapper[4886]: I0129 17:09:00.896470 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c467eb7e-a553-4fc5-b366-607a30fe18dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c467eb7e-a553-4fc5-b366-607a30fe18dd" (UID: "c467eb7e-a553-4fc5-b366-607a30fe18dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:00 crc kubenswrapper[4886]: I0129 17:09:00.896537 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c467eb7e-a553-4fc5-b366-607a30fe18dd-config-data" (OuterVolumeSpecName: "config-data") pod "c467eb7e-a553-4fc5-b366-607a30fe18dd" (UID: "c467eb7e-a553-4fc5-b366-607a30fe18dd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:00 crc kubenswrapper[4886]: I0129 17:09:00.966918 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c467eb7e-a553-4fc5-b366-607a30fe18dd-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:00 crc kubenswrapper[4886]: I0129 17:09:00.967209 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97hdc\" (UniqueName: \"kubernetes.io/projected/c467eb7e-a553-4fc5-b366-607a30fe18dd-kube-api-access-97hdc\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:00 crc kubenswrapper[4886]: I0129 17:09:00.967393 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c467eb7e-a553-4fc5-b366-607a30fe18dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:00 crc kubenswrapper[4886]: I0129 17:09:00.967522 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c467eb7e-a553-4fc5-b366-607a30fe18dd-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:01 crc kubenswrapper[4886]: I0129 17:09:01.252767 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-c4q4z" event={"ID":"c467eb7e-a553-4fc5-b366-607a30fe18dd","Type":"ContainerDied","Data":"e030969deba149d036416125fae7ad0b0c1ce2a5efabff4aeea1c2936fb7a1ec"} Jan 29 17:09:01 crc kubenswrapper[4886]: I0129 17:09:01.253101 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e030969deba149d036416125fae7ad0b0c1ce2a5efabff4aeea1c2936fb7a1ec" Jan 29 17:09:01 crc kubenswrapper[4886]: I0129 17:09:01.253007 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-c4q4z" Jan 29 17:09:01 crc kubenswrapper[4886]: I0129 17:09:01.360145 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 17:09:01 crc kubenswrapper[4886]: E0129 17:09:01.360624 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c467eb7e-a553-4fc5-b366-607a30fe18dd" containerName="nova-cell0-conductor-db-sync" Jan 29 17:09:01 crc kubenswrapper[4886]: I0129 17:09:01.360642 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c467eb7e-a553-4fc5-b366-607a30fe18dd" containerName="nova-cell0-conductor-db-sync" Jan 29 17:09:01 crc kubenswrapper[4886]: I0129 17:09:01.360977 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="c467eb7e-a553-4fc5-b366-607a30fe18dd" containerName="nova-cell0-conductor-db-sync" Jan 29 17:09:01 crc kubenswrapper[4886]: I0129 17:09:01.361792 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 17:09:01 crc kubenswrapper[4886]: I0129 17:09:01.363828 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-wcdz5" Jan 29 17:09:01 crc kubenswrapper[4886]: I0129 17:09:01.364535 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 17:09:01 crc kubenswrapper[4886]: I0129 17:09:01.376308 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc7sn\" (UniqueName: \"kubernetes.io/projected/bb22403c-016a-48ea-954a-b7b14ea77d7f-kube-api-access-bc7sn\") pod \"nova-cell0-conductor-0\" (UID: \"bb22403c-016a-48ea-954a-b7b14ea77d7f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 17:09:01 crc kubenswrapper[4886]: I0129 17:09:01.376365 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb22403c-016a-48ea-954a-b7b14ea77d7f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bb22403c-016a-48ea-954a-b7b14ea77d7f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 17:09:01 crc kubenswrapper[4886]: I0129 17:09:01.376472 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb22403c-016a-48ea-954a-b7b14ea77d7f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bb22403c-016a-48ea-954a-b7b14ea77d7f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 17:09:01 crc kubenswrapper[4886]: I0129 17:09:01.382254 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 17:09:01 crc kubenswrapper[4886]: I0129 17:09:01.479586 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc7sn\" (UniqueName: \"kubernetes.io/projected/bb22403c-016a-48ea-954a-b7b14ea77d7f-kube-api-access-bc7sn\") pod \"nova-cell0-conductor-0\" (UID: \"bb22403c-016a-48ea-954a-b7b14ea77d7f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 17:09:01 crc kubenswrapper[4886]: I0129 17:09:01.479647 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb22403c-016a-48ea-954a-b7b14ea77d7f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bb22403c-016a-48ea-954a-b7b14ea77d7f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 17:09:01 crc kubenswrapper[4886]: 
I0129 17:09:01.479775 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb22403c-016a-48ea-954a-b7b14ea77d7f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bb22403c-016a-48ea-954a-b7b14ea77d7f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 17:09:01 crc kubenswrapper[4886]: I0129 17:09:01.492007 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb22403c-016a-48ea-954a-b7b14ea77d7f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bb22403c-016a-48ea-954a-b7b14ea77d7f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 17:09:01 crc kubenswrapper[4886]: I0129 17:09:01.492399 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb22403c-016a-48ea-954a-b7b14ea77d7f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bb22403c-016a-48ea-954a-b7b14ea77d7f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 17:09:01 crc kubenswrapper[4886]: I0129 17:09:01.501520 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc7sn\" (UniqueName: \"kubernetes.io/projected/bb22403c-016a-48ea-954a-b7b14ea77d7f-kube-api-access-bc7sn\") pod \"nova-cell0-conductor-0\" (UID: \"bb22403c-016a-48ea-954a-b7b14ea77d7f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 17:09:01 crc kubenswrapper[4886]: I0129 17:09:01.685825 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 17:09:02 crc kubenswrapper[4886]: I0129 17:09:02.170951 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 17:09:02 crc kubenswrapper[4886]: I0129 17:09:02.292808 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bb22403c-016a-48ea-954a-b7b14ea77d7f","Type":"ContainerStarted","Data":"9b6695254391d39cd72ea747b0a5494ab7ebb80ca161b9598778aa51d461fb31"} Jan 29 17:09:02 crc kubenswrapper[4886]: I0129 17:09:02.868057 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.041950 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc82dcdd-793c-4083-9143-1b04037f40d3-combined-ca-bundle\") pod \"dc82dcdd-793c-4083-9143-1b04037f40d3\" (UID: \"dc82dcdd-793c-4083-9143-1b04037f40d3\") " Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.042045 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc82dcdd-793c-4083-9143-1b04037f40d3-scripts\") pod \"dc82dcdd-793c-4083-9143-1b04037f40d3\" (UID: \"dc82dcdd-793c-4083-9143-1b04037f40d3\") " Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.042199 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrz2g\" (UniqueName: \"kubernetes.io/projected/dc82dcdd-793c-4083-9143-1b04037f40d3-kube-api-access-wrz2g\") pod \"dc82dcdd-793c-4083-9143-1b04037f40d3\" (UID: \"dc82dcdd-793c-4083-9143-1b04037f40d3\") " Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.042245 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc82dcdd-793c-4083-9143-1b04037f40d3-log-httpd\") pod \"dc82dcdd-793c-4083-9143-1b04037f40d3\" (UID: \"dc82dcdd-793c-4083-9143-1b04037f40d3\") " Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.042279 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc82dcdd-793c-4083-9143-1b04037f40d3-config-data\") pod \"dc82dcdd-793c-4083-9143-1b04037f40d3\" (UID: \"dc82dcdd-793c-4083-9143-1b04037f40d3\") " Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.042386 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc82dcdd-793c-4083-9143-1b04037f40d3-sg-core-conf-yaml\") pod \"dc82dcdd-793c-4083-9143-1b04037f40d3\" (UID: \"dc82dcdd-793c-4083-9143-1b04037f40d3\") " Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.042431 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc82dcdd-793c-4083-9143-1b04037f40d3-run-httpd\") pod \"dc82dcdd-793c-4083-9143-1b04037f40d3\" (UID: \"dc82dcdd-793c-4083-9143-1b04037f40d3\") " Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.044118 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc82dcdd-793c-4083-9143-1b04037f40d3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dc82dcdd-793c-4083-9143-1b04037f40d3" (UID: "dc82dcdd-793c-4083-9143-1b04037f40d3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.044482 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc82dcdd-793c-4083-9143-1b04037f40d3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dc82dcdd-793c-4083-9143-1b04037f40d3" (UID: "dc82dcdd-793c-4083-9143-1b04037f40d3"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.048801 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc82dcdd-793c-4083-9143-1b04037f40d3-scripts" (OuterVolumeSpecName: "scripts") pod "dc82dcdd-793c-4083-9143-1b04037f40d3" (UID: "dc82dcdd-793c-4083-9143-1b04037f40d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.049854 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc82dcdd-793c-4083-9143-1b04037f40d3-kube-api-access-wrz2g" (OuterVolumeSpecName: "kube-api-access-wrz2g") pod "dc82dcdd-793c-4083-9143-1b04037f40d3" (UID: "dc82dcdd-793c-4083-9143-1b04037f40d3"). InnerVolumeSpecName "kube-api-access-wrz2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.106962 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc82dcdd-793c-4083-9143-1b04037f40d3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dc82dcdd-793c-4083-9143-1b04037f40d3" (UID: "dc82dcdd-793c-4083-9143-1b04037f40d3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.146361 4886 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc82dcdd-793c-4083-9143-1b04037f40d3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.146402 4886 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc82dcdd-793c-4083-9143-1b04037f40d3-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.146414 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc82dcdd-793c-4083-9143-1b04037f40d3-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.146448 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrz2g\" (UniqueName: \"kubernetes.io/projected/dc82dcdd-793c-4083-9143-1b04037f40d3-kube-api-access-wrz2g\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.146461 4886 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc82dcdd-793c-4083-9143-1b04037f40d3-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.185960 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc82dcdd-793c-4083-9143-1b04037f40d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc82dcdd-793c-4083-9143-1b04037f40d3" (UID: "dc82dcdd-793c-4083-9143-1b04037f40d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.212209 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc82dcdd-793c-4083-9143-1b04037f40d3-config-data" (OuterVolumeSpecName: "config-data") pod "dc82dcdd-793c-4083-9143-1b04037f40d3" (UID: "dc82dcdd-793c-4083-9143-1b04037f40d3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.249182 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc82dcdd-793c-4083-9143-1b04037f40d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.249226 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc82dcdd-793c-4083-9143-1b04037f40d3-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.313460 4886 generic.go:334] "Generic (PLEG): container finished" podID="dc82dcdd-793c-4083-9143-1b04037f40d3" containerID="8637ee0b12535652fad4c6c24b400526b4e4e5a64b9711598c8207164cbe4a20" exitCode=0 Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.313562 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc82dcdd-793c-4083-9143-1b04037f40d3","Type":"ContainerDied","Data":"8637ee0b12535652fad4c6c24b400526b4e4e5a64b9711598c8207164cbe4a20"} Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.314884 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc82dcdd-793c-4083-9143-1b04037f40d3","Type":"ContainerDied","Data":"17d5fd5f42ae0736004ca73847456c411bd6a9d8d5a5c3344ecb73c5ac5a2736"} Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.314974 4886 scope.go:117] "RemoveContainer" containerID="b4bbd9c439d2c24659fb57b3faf885aaff4aa720b408e45a5289e66ac74560d4" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.313603 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.328449 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bb22403c-016a-48ea-954a-b7b14ea77d7f","Type":"ContainerStarted","Data":"c465162f1ad3d58d9d3acd7ece43f775baddecdcd0956b5a30e3866d2383acf1"} Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.330046 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.369588 4886 scope.go:117] "RemoveContainer" containerID="6549cee8bc993f3edbbbdedce8da615b537aaf75fc4fbbffe8a146e13427c8c8" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.394445 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.394419173 podStartE2EDuration="2.394419173s" podCreationTimestamp="2026-01-29 17:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:09:03.390884034 +0000 UTC m=+2826.299603306" watchObservedRunningTime="2026-01-29 17:09:03.394419173 +0000 UTC m=+2826.303138445" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.426657 4886 scope.go:117] "RemoveContainer" containerID="027f2f6b9a90551af8155e3f9d55caa5b15fe881b17a34fbffe2e1da19cdee97" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.452578 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.476788 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.504425 4886 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:09:03 crc kubenswrapper[4886]: E0129 17:09:03.505112 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc82dcdd-793c-4083-9143-1b04037f40d3" containerName="ceilometer-central-agent" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.505141 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc82dcdd-793c-4083-9143-1b04037f40d3" containerName="ceilometer-central-agent" Jan 29 17:09:03 crc kubenswrapper[4886]: E0129 17:09:03.505163 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc82dcdd-793c-4083-9143-1b04037f40d3" containerName="proxy-httpd" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.505172 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc82dcdd-793c-4083-9143-1b04037f40d3" containerName="proxy-httpd" Jan 29 17:09:03 crc kubenswrapper[4886]: E0129 17:09:03.505200 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc82dcdd-793c-4083-9143-1b04037f40d3" containerName="ceilometer-notification-agent" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.505209 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc82dcdd-793c-4083-9143-1b04037f40d3" containerName="ceilometer-notification-agent" Jan 29 17:09:03 crc kubenswrapper[4886]: E0129 17:09:03.505255 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc82dcdd-793c-4083-9143-1b04037f40d3" containerName="sg-core" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.505264 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc82dcdd-793c-4083-9143-1b04037f40d3" containerName="sg-core" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.505572 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc82dcdd-793c-4083-9143-1b04037f40d3" containerName="ceilometer-notification-agent" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.505607 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc82dcdd-793c-4083-9143-1b04037f40d3" containerName="sg-core" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.505641 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc82dcdd-793c-4083-9143-1b04037f40d3" containerName="proxy-httpd" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.505659 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc82dcdd-793c-4083-9143-1b04037f40d3" containerName="ceilometer-central-agent" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.523962 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.524123 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.530853 4886 scope.go:117] "RemoveContainer" containerID="8637ee0b12535652fad4c6c24b400526b4e4e5a64b9711598c8207164cbe4a20" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.531094 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.531177 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.587542 4886 scope.go:117] "RemoveContainer" containerID="b4bbd9c439d2c24659fb57b3faf885aaff4aa720b408e45a5289e66ac74560d4" Jan 29 17:09:03 crc kubenswrapper[4886]: E0129 17:09:03.588797 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4bbd9c439d2c24659fb57b3faf885aaff4aa720b408e45a5289e66ac74560d4\": container with ID starting with b4bbd9c439d2c24659fb57b3faf885aaff4aa720b408e45a5289e66ac74560d4 not found: ID does not exist" containerID="b4bbd9c439d2c24659fb57b3faf885aaff4aa720b408e45a5289e66ac74560d4" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.588854 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4bbd9c439d2c24659fb57b3faf885aaff4aa720b408e45a5289e66ac74560d4"} err="failed to get container status \"b4bbd9c439d2c24659fb57b3faf885aaff4aa720b408e45a5289e66ac74560d4\": rpc error: code = NotFound desc = could not find container \"b4bbd9c439d2c24659fb57b3faf885aaff4aa720b408e45a5289e66ac74560d4\": container with ID starting with b4bbd9c439d2c24659fb57b3faf885aaff4aa720b408e45a5289e66ac74560d4 not found: ID does not exist" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.588889 4886 scope.go:117] "RemoveContainer" containerID="6549cee8bc993f3edbbbdedce8da615b537aaf75fc4fbbffe8a146e13427c8c8" Jan 29 17:09:03 crc kubenswrapper[4886]: E0129 17:09:03.589881 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6549cee8bc993f3edbbbdedce8da615b537aaf75fc4fbbffe8a146e13427c8c8\": container with ID starting with 6549cee8bc993f3edbbbdedce8da615b537aaf75fc4fbbffe8a146e13427c8c8 not found: ID does not exist" containerID="6549cee8bc993f3edbbbdedce8da615b537aaf75fc4fbbffe8a146e13427c8c8" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.589926 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6549cee8bc993f3edbbbdedce8da615b537aaf75fc4fbbffe8a146e13427c8c8"} err="failed to get container status \"6549cee8bc993f3edbbbdedce8da615b537aaf75fc4fbbffe8a146e13427c8c8\": rpc error: code = NotFound desc = could not find container \"6549cee8bc993f3edbbbdedce8da615b537aaf75fc4fbbffe8a146e13427c8c8\": container with ID starting with 6549cee8bc993f3edbbbdedce8da615b537aaf75fc4fbbffe8a146e13427c8c8 not found: ID does not exist" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.589992 4886 scope.go:117] "RemoveContainer" containerID="027f2f6b9a90551af8155e3f9d55caa5b15fe881b17a34fbffe2e1da19cdee97" Jan 29 17:09:03 crc kubenswrapper[4886]: E0129 17:09:03.590459 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"027f2f6b9a90551af8155e3f9d55caa5b15fe881b17a34fbffe2e1da19cdee97\": container with ID starting with 
027f2f6b9a90551af8155e3f9d55caa5b15fe881b17a34fbffe2e1da19cdee97 not found: ID does not exist" containerID="027f2f6b9a90551af8155e3f9d55caa5b15fe881b17a34fbffe2e1da19cdee97" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.590490 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"027f2f6b9a90551af8155e3f9d55caa5b15fe881b17a34fbffe2e1da19cdee97"} err="failed to get container status \"027f2f6b9a90551af8155e3f9d55caa5b15fe881b17a34fbffe2e1da19cdee97\": rpc error: code = NotFound desc = could not find container \"027f2f6b9a90551af8155e3f9d55caa5b15fe881b17a34fbffe2e1da19cdee97\": container with ID starting with 027f2f6b9a90551af8155e3f9d55caa5b15fe881b17a34fbffe2e1da19cdee97 not found: ID does not exist" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.590510 4886 scope.go:117] "RemoveContainer" containerID="8637ee0b12535652fad4c6c24b400526b4e4e5a64b9711598c8207164cbe4a20" Jan 29 17:09:03 crc kubenswrapper[4886]: E0129 17:09:03.591680 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8637ee0b12535652fad4c6c24b400526b4e4e5a64b9711598c8207164cbe4a20\": container with ID starting with 8637ee0b12535652fad4c6c24b400526b4e4e5a64b9711598c8207164cbe4a20 not found: ID does not exist" containerID="8637ee0b12535652fad4c6c24b400526b4e4e5a64b9711598c8207164cbe4a20" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.591731 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8637ee0b12535652fad4c6c24b400526b4e4e5a64b9711598c8207164cbe4a20"} err="failed to get container status \"8637ee0b12535652fad4c6c24b400526b4e4e5a64b9711598c8207164cbe4a20\": rpc error: code = NotFound desc = could not find container \"8637ee0b12535652fad4c6c24b400526b4e4e5a64b9711598c8207164cbe4a20\": container with ID starting with 8637ee0b12535652fad4c6c24b400526b4e4e5a64b9711598c8207164cbe4a20 not found: ID does not exist" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.670563 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-run-httpd\") pod \"ceilometer-0\" (UID: \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\") " pod="openstack/ceilometer-0" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.670633 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-config-data\") pod \"ceilometer-0\" (UID: \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\") " pod="openstack/ceilometer-0" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.670723 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-scripts\") pod \"ceilometer-0\" (UID: \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\") " pod="openstack/ceilometer-0" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.670817 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\") " pod="openstack/ceilometer-0" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.670917 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\") " pod="openstack/ceilometer-0" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.671067 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-log-httpd\") pod \"ceilometer-0\" (UID: \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\") " pod="openstack/ceilometer-0" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.671120 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk52z\" (UniqueName: \"kubernetes.io/projected/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-kube-api-access-fk52z\") pod \"ceilometer-0\" (UID: \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\") " pod="openstack/ceilometer-0" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.772849 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-run-httpd\") pod \"ceilometer-0\" (UID: \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\") " pod="openstack/ceilometer-0" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.772908 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-config-data\") pod \"ceilometer-0\" (UID: \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\") " pod="openstack/ceilometer-0" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.772967 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-scripts\") pod \"ceilometer-0\" (UID: \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\") " pod="openstack/ceilometer-0" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.773002 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\") " pod="openstack/ceilometer-0" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.773030 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\") " pod="openstack/ceilometer-0" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.773105 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-log-httpd\") pod \"ceilometer-0\" (UID: \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\") " pod="openstack/ceilometer-0" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.773126 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk52z\" (UniqueName: \"kubernetes.io/projected/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-kube-api-access-fk52z\") pod \"ceilometer-0\" (UID: \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\") " pod="openstack/ceilometer-0" Jan 29 
17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.773370 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-run-httpd\") pod \"ceilometer-0\" (UID: \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\") " pod="openstack/ceilometer-0" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.773741 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-log-httpd\") pod \"ceilometer-0\" (UID: \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\") " pod="openstack/ceilometer-0" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.778018 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\") " pod="openstack/ceilometer-0" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.778168 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-scripts\") pod \"ceilometer-0\" (UID: \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\") " pod="openstack/ceilometer-0" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.781710 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\") " pod="openstack/ceilometer-0" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.782903 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-config-data\") pod \"ceilometer-0\" (UID: \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\") " pod="openstack/ceilometer-0" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.790406 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk52z\" (UniqueName: \"kubernetes.io/projected/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-kube-api-access-fk52z\") pod \"ceilometer-0\" (UID: \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\") " pod="openstack/ceilometer-0" Jan 29 17:09:03 crc kubenswrapper[4886]: I0129 17:09:03.860388 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:09:04 crc kubenswrapper[4886]: I0129 17:09:04.410819 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:09:04 crc kubenswrapper[4886]: I0129 17:09:04.630189 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc82dcdd-793c-4083-9143-1b04037f40d3" path="/var/lib/kubelet/pods/dc82dcdd-793c-4083-9143-1b04037f40d3/volumes" Jan 29 17:09:05 crc kubenswrapper[4886]: I0129 17:09:05.354699 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c","Type":"ContainerStarted","Data":"f28b9a9b2e33861b2b8937e8a0acf07992031f2291a0da6c8fc53223704d8f50"} Jan 29 17:09:05 crc kubenswrapper[4886]: I0129 17:09:05.355287 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c","Type":"ContainerStarted","Data":"ee2c96cf4752f271ab59c1e5d9ef8010edcb2061ecccd36a34d602bf9c8f1068"} Jan 29 17:09:06 crc kubenswrapper[4886]: I0129 17:09:06.368644 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c","Type":"ContainerStarted","Data":"8ed383dcd150e84a715deaf0b080e1c2f8bb3800fd02ff47edc2c3516be536cf"} Jan 29 17:09:07 crc kubenswrapper[4886]: I0129 17:09:07.384475 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c","Type":"ContainerStarted","Data":"1d04206c0d41b909492932943b574fcef26ed1b2dfcf90d669a67515dcaabab7"} Jan 29 17:09:09 crc kubenswrapper[4886]: I0129 17:09:09.414138 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c","Type":"ContainerStarted","Data":"b8916a65aaeb4f4e843c5fba061a08311e52e99052d791e323ff6941a73b7589"} Jan 29 17:09:09 crc kubenswrapper[4886]: I0129 17:09:09.415189 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 17:09:09 crc kubenswrapper[4886]: I0129 17:09:09.450730 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.200388919 podStartE2EDuration="6.450711102s" podCreationTimestamp="2026-01-29 17:09:03 +0000 UTC" firstStartedPulling="2026-01-29 17:09:04.400739776 +0000 UTC m=+2827.309459048" lastFinishedPulling="2026-01-29 17:09:08.651061959 +0000 UTC m=+2831.559781231" observedRunningTime="2026-01-29 17:09:09.450047733 +0000 UTC m=+2832.358767055" watchObservedRunningTime="2026-01-29 17:09:09.450711102 +0000 UTC m=+2832.359430584" Jan 29 17:09:11 crc kubenswrapper[4886]: I0129 17:09:11.728046 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.514196 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-tqcf4"] Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.517806 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-tqcf4" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.522859 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.523098 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.535096 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-tqcf4"] Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.599050 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cabf586-398a-45a9-80d6-2fd63d9e14e5-config-data\") pod \"nova-cell0-cell-mapping-tqcf4\" (UID: \"8cabf586-398a-45a9-80d6-2fd63d9e14e5\") " pod="openstack/nova-cell0-cell-mapping-tqcf4" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.599105 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cabf586-398a-45a9-80d6-2fd63d9e14e5-scripts\") pod \"nova-cell0-cell-mapping-tqcf4\" (UID: \"8cabf586-398a-45a9-80d6-2fd63d9e14e5\") " pod="openstack/nova-cell0-cell-mapping-tqcf4" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.599150 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhvmq\" (UniqueName: \"kubernetes.io/projected/8cabf586-398a-45a9-80d6-2fd63d9e14e5-kube-api-access-vhvmq\") pod \"nova-cell0-cell-mapping-tqcf4\" (UID: \"8cabf586-398a-45a9-80d6-2fd63d9e14e5\") " pod="openstack/nova-cell0-cell-mapping-tqcf4" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.599379 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cabf586-398a-45a9-80d6-2fd63d9e14e5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-tqcf4\" (UID: \"8cabf586-398a-45a9-80d6-2fd63d9e14e5\") " pod="openstack/nova-cell0-cell-mapping-tqcf4" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.664484 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.666318 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.683852 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.685400 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.701159 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cabf586-398a-45a9-80d6-2fd63d9e14e5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-tqcf4\" (UID: \"8cabf586-398a-45a9-80d6-2fd63d9e14e5\") " pod="openstack/nova-cell0-cell-mapping-tqcf4" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.701276 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cabf586-398a-45a9-80d6-2fd63d9e14e5-config-data\") pod \"nova-cell0-cell-mapping-tqcf4\" (UID: \"8cabf586-398a-45a9-80d6-2fd63d9e14e5\") " pod="openstack/nova-cell0-cell-mapping-tqcf4" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.701302 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cabf586-398a-45a9-80d6-2fd63d9e14e5-scripts\") pod \"nova-cell0-cell-mapping-tqcf4\" (UID: \"8cabf586-398a-45a9-80d6-2fd63d9e14e5\") " pod="openstack/nova-cell0-cell-mapping-tqcf4" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.701368 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhvmq\" (UniqueName: \"kubernetes.io/projected/8cabf586-398a-45a9-80d6-2fd63d9e14e5-kube-api-access-vhvmq\") pod \"nova-cell0-cell-mapping-tqcf4\" (UID: \"8cabf586-398a-45a9-80d6-2fd63d9e14e5\") " pod="openstack/nova-cell0-cell-mapping-tqcf4" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.716267 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.718303 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.722035 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.730833 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cabf586-398a-45a9-80d6-2fd63d9e14e5-scripts\") pod \"nova-cell0-cell-mapping-tqcf4\" (UID: \"8cabf586-398a-45a9-80d6-2fd63d9e14e5\") " pod="openstack/nova-cell0-cell-mapping-tqcf4" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.735529 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cabf586-398a-45a9-80d6-2fd63d9e14e5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-tqcf4\" (UID: \"8cabf586-398a-45a9-80d6-2fd63d9e14e5\") " pod="openstack/nova-cell0-cell-mapping-tqcf4" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.742086 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.768690 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.770185 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.783985 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.800903 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cabf586-398a-45a9-80d6-2fd63d9e14e5-config-data\") pod \"nova-cell0-cell-mapping-tqcf4\" (UID: \"8cabf586-398a-45a9-80d6-2fd63d9e14e5\") " pod="openstack/nova-cell0-cell-mapping-tqcf4" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.804389 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c24e1f4d-2c34-4496-bd90-4fe840552491-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c24e1f4d-2c34-4496-bd90-4fe840552491\") " pod="openstack/nova-api-0" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.804467 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63670887-1250-42df-a728-315414be9901-logs\") pod \"nova-metadata-0\" (UID: \"63670887-1250-42df-a728-315414be9901\") " pod="openstack/nova-metadata-0" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.804492 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63670887-1250-42df-a728-315414be9901-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"63670887-1250-42df-a728-315414be9901\") " pod="openstack/nova-metadata-0" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.804783 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frqq5\" (UniqueName: \"kubernetes.io/projected/63670887-1250-42df-a728-315414be9901-kube-api-access-frqq5\") pod \"nova-metadata-0\" (UID: \"63670887-1250-42df-a728-315414be9901\") " pod="openstack/nova-metadata-0" Jan 29 17:09:12 crc 
kubenswrapper[4886]: I0129 17:09:12.804863 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c24e1f4d-2c34-4496-bd90-4fe840552491-config-data\") pod \"nova-api-0\" (UID: \"c24e1f4d-2c34-4496-bd90-4fe840552491\") " pod="openstack/nova-api-0" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.804932 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63670887-1250-42df-a728-315414be9901-config-data\") pod \"nova-metadata-0\" (UID: \"63670887-1250-42df-a728-315414be9901\") " pod="openstack/nova-metadata-0" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.804952 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sdhg\" (UniqueName: \"kubernetes.io/projected/c24e1f4d-2c34-4496-bd90-4fe840552491-kube-api-access-5sdhg\") pod \"nova-api-0\" (UID: \"c24e1f4d-2c34-4496-bd90-4fe840552491\") " pod="openstack/nova-api-0" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.804968 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c24e1f4d-2c34-4496-bd90-4fe840552491-logs\") pod \"nova-api-0\" (UID: \"c24e1f4d-2c34-4496-bd90-4fe840552491\") " pod="openstack/nova-api-0" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.810438 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhvmq\" (UniqueName: \"kubernetes.io/projected/8cabf586-398a-45a9-80d6-2fd63d9e14e5-kube-api-access-vhvmq\") pod \"nova-cell0-cell-mapping-tqcf4\" (UID: \"8cabf586-398a-45a9-80d6-2fd63d9e14e5\") " pod="openstack/nova-cell0-cell-mapping-tqcf4" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.821249 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.843221 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-tqcf4" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.908173 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3441bcd4-bf8b-406f-b3f5-1c723908bdc4-config-data\") pod \"nova-scheduler-0\" (UID: \"3441bcd4-bf8b-406f-b3f5-1c723908bdc4\") " pod="openstack/nova-scheduler-0" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.908246 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c24e1f4d-2c34-4496-bd90-4fe840552491-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c24e1f4d-2c34-4496-bd90-4fe840552491\") " pod="openstack/nova-api-0" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.908265 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dblx2\" (UniqueName: \"kubernetes.io/projected/3441bcd4-bf8b-406f-b3f5-1c723908bdc4-kube-api-access-dblx2\") pod \"nova-scheduler-0\" (UID: \"3441bcd4-bf8b-406f-b3f5-1c723908bdc4\") " pod="openstack/nova-scheduler-0" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.908307 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63670887-1250-42df-a728-315414be9901-logs\") pod \"nova-metadata-0\" (UID: \"63670887-1250-42df-a728-315414be9901\") " pod="openstack/nova-metadata-0" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.908342 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63670887-1250-42df-a728-315414be9901-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"63670887-1250-42df-a728-315414be9901\") " pod="openstack/nova-metadata-0" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.908384 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frqq5\" (UniqueName: \"kubernetes.io/projected/63670887-1250-42df-a728-315414be9901-kube-api-access-frqq5\") pod \"nova-metadata-0\" (UID: \"63670887-1250-42df-a728-315414be9901\") " pod="openstack/nova-metadata-0" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.908420 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c24e1f4d-2c34-4496-bd90-4fe840552491-config-data\") pod \"nova-api-0\" (UID: \"c24e1f4d-2c34-4496-bd90-4fe840552491\") " pod="openstack/nova-api-0" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.908458 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63670887-1250-42df-a728-315414be9901-config-data\") pod \"nova-metadata-0\" (UID: \"63670887-1250-42df-a728-315414be9901\") " pod="openstack/nova-metadata-0" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.908476 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sdhg\" (UniqueName: \"kubernetes.io/projected/c24e1f4d-2c34-4496-bd90-4fe840552491-kube-api-access-5sdhg\") pod \"nova-api-0\" (UID: \"c24e1f4d-2c34-4496-bd90-4fe840552491\") " pod="openstack/nova-api-0" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.908492 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c24e1f4d-2c34-4496-bd90-4fe840552491-logs\") pod \"nova-api-0\" (UID: \"c24e1f4d-2c34-4496-bd90-4fe840552491\") " pod="openstack/nova-api-0" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.908511 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3441bcd4-bf8b-406f-b3f5-1c723908bdc4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3441bcd4-bf8b-406f-b3f5-1c723908bdc4\") " pod="openstack/nova-scheduler-0" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.909945 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63670887-1250-42df-a728-315414be9901-logs\") pod \"nova-metadata-0\" (UID: \"63670887-1250-42df-a728-315414be9901\") " pod="openstack/nova-metadata-0" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.910763 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c24e1f4d-2c34-4496-bd90-4fe840552491-logs\") pod \"nova-api-0\" (UID: \"c24e1f4d-2c34-4496-bd90-4fe840552491\") " pod="openstack/nova-api-0" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.937148 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c24e1f4d-2c34-4496-bd90-4fe840552491-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c24e1f4d-2c34-4496-bd90-4fe840552491\") " pod="openstack/nova-api-0" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.944893 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63670887-1250-42df-a728-315414be9901-config-data\") pod \"nova-metadata-0\" (UID: \"63670887-1250-42df-a728-315414be9901\") " pod="openstack/nova-metadata-0" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.944966 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c24e1f4d-2c34-4496-bd90-4fe840552491-config-data\") pod \"nova-api-0\" (UID: \"c24e1f4d-2c34-4496-bd90-4fe840552491\") " pod="openstack/nova-api-0" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.957083 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sdhg\" (UniqueName: \"kubernetes.io/projected/c24e1f4d-2c34-4496-bd90-4fe840552491-kube-api-access-5sdhg\") pod \"nova-api-0\" (UID: \"c24e1f4d-2c34-4496-bd90-4fe840552491\") " pod="openstack/nova-api-0" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.959972 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-zdbgk"] Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.961796 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.968077 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frqq5\" (UniqueName: \"kubernetes.io/projected/63670887-1250-42df-a728-315414be9901-kube-api-access-frqq5\") pod \"nova-metadata-0\" (UID: \"63670887-1250-42df-a728-315414be9901\") " pod="openstack/nova-metadata-0" Jan 29 17:09:12 crc kubenswrapper[4886]: I0129 17:09:12.968202 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63670887-1250-42df-a728-315414be9901-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"63670887-1250-42df-a728-315414be9901\") " pod="openstack/nova-metadata-0" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.002369 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-zdbgk"] Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.011282 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dblx2\" (UniqueName: \"kubernetes.io/projected/3441bcd4-bf8b-406f-b3f5-1c723908bdc4-kube-api-access-dblx2\") pod \"nova-scheduler-0\" (UID: \"3441bcd4-bf8b-406f-b3f5-1c723908bdc4\") " pod="openstack/nova-scheduler-0" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.011505 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3441bcd4-bf8b-406f-b3f5-1c723908bdc4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3441bcd4-bf8b-406f-b3f5-1c723908bdc4\") " pod="openstack/nova-scheduler-0" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.011558 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3441bcd4-bf8b-406f-b3f5-1c723908bdc4-config-data\") pod \"nova-scheduler-0\" (UID: \"3441bcd4-bf8b-406f-b3f5-1c723908bdc4\") " pod="openstack/nova-scheduler-0" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.036260 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3441bcd4-bf8b-406f-b3f5-1c723908bdc4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3441bcd4-bf8b-406f-b3f5-1c723908bdc4\") " pod="openstack/nova-scheduler-0" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.038184 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3441bcd4-bf8b-406f-b3f5-1c723908bdc4-config-data\") pod \"nova-scheduler-0\" (UID: \"3441bcd4-bf8b-406f-b3f5-1c723908bdc4\") " pod="openstack/nova-scheduler-0" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.046432 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dblx2\" (UniqueName: \"kubernetes.io/projected/3441bcd4-bf8b-406f-b3f5-1c723908bdc4-kube-api-access-dblx2\") pod \"nova-scheduler-0\" (UID: \"3441bcd4-bf8b-406f-b3f5-1c723908bdc4\") " pod="openstack/nova-scheduler-0" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.049392 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.051304 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.052416 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.083519 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.092938 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.107673 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.113363 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9csz\" (UniqueName: \"kubernetes.io/projected/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-kube-api-access-x9csz\") pod \"dnsmasq-dns-9b86998b5-zdbgk\" (UID: \"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1\") " pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.113427 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-dns-svc\") pod \"dnsmasq-dns-9b86998b5-zdbgk\" (UID: \"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1\") " pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.113492 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-config\") pod \"dnsmasq-dns-9b86998b5-zdbgk\" (UID: \"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1\") " pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.113530 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-zdbgk\" (UID: \"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1\") " pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.113553 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-zdbgk\" (UID: \"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1\") " pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.113592 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-zdbgk\" (UID: \"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1\") " pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.122780 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.216213 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9csz\" (UniqueName: \"kubernetes.io/projected/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-kube-api-access-x9csz\") pod \"dnsmasq-dns-9b86998b5-zdbgk\" (UID: \"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1\") " pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.216721 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-dns-svc\") pod \"dnsmasq-dns-9b86998b5-zdbgk\" (UID: \"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1\") " pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.216880 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.216913 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.217040 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-config\") pod \"dnsmasq-dns-9b86998b5-zdbgk\" (UID: \"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1\") " pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.217137 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-zdbgk\" (UID: \"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1\") " pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.217194 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-zdbgk\" (UID: \"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1\") " pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.217263 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-zdbgk\" (UID: \"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1\") " pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.217336 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmffz\" (UniqueName: \"kubernetes.io/projected/cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11-kube-api-access-rmffz\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.217777 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-dns-svc\") pod \"dnsmasq-dns-9b86998b5-zdbgk\" (UID: \"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1\") " pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.219195 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-zdbgk\" (UID: \"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1\") " pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.220008 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-zdbgk\" (UID: \"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1\") " pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.220076 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-config\") pod \"dnsmasq-dns-9b86998b5-zdbgk\" (UID: \"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1\") " pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.220391 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-zdbgk\" (UID: \"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1\") " pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.268814 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9csz\" (UniqueName: \"kubernetes.io/projected/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-kube-api-access-x9csz\") pod \"dnsmasq-dns-9b86998b5-zdbgk\" (UID: \"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1\") " pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.319820 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmffz\" (UniqueName: \"kubernetes.io/projected/cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11-kube-api-access-rmffz\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.320013 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.320040 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.326762 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.332192 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.344520 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmffz\" (UniqueName: \"kubernetes.io/projected/cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11-kube-api-access-rmffz\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.434303 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.442614 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.822398 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-tqcf4"] Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.913565 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 17:09:13 crc kubenswrapper[4886]: I0129 17:09:13.957151 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 17:09:14 crc kubenswrapper[4886]: I0129 17:09:14.112855 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 17:09:14 crc kubenswrapper[4886]: I0129 17:09:14.344340 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 17:09:14 crc kubenswrapper[4886]: W0129 17:09:14.346512 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ccf7a7a_f65b_4942_9bfa_bc7a377e6ff1.slice/crio-f636861581833a86368762de32a4ca62df7734738d06a2800f3b6b0ee4fb4aa1 WatchSource:0}: Error finding container f636861581833a86368762de32a4ca62df7734738d06a2800f3b6b0ee4fb4aa1: Status 404 returned error can't find the container with id f636861581833a86368762de32a4ca62df7734738d06a2800f3b6b0ee4fb4aa1 Jan 29 17:09:14 crc kubenswrapper[4886]: I0129 17:09:14.367794 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-zdbgk"] Jan 29 17:09:14 crc kubenswrapper[4886]: I0129 17:09:14.494937 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c24e1f4d-2c34-4496-bd90-4fe840552491","Type":"ContainerStarted","Data":"eb8a3baac4fbd0a80179f8a19f3f61fb9fca2e4d5dcfe096915c43ef69238e98"} Jan 29 17:09:14 crc kubenswrapper[4886]: I0129 17:09:14.520895 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-tqcf4" event={"ID":"8cabf586-398a-45a9-80d6-2fd63d9e14e5","Type":"ContainerStarted","Data":"d6960d602147a760f370e0aaeba322f8c53999b050075e5ef6c33ecafc0b7928"} Jan 29 17:09:14 crc kubenswrapper[4886]: I0129 17:09:14.520936 4886 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-tqcf4" event={"ID":"8cabf586-398a-45a9-80d6-2fd63d9e14e5","Type":"ContainerStarted","Data":"c9ea59738c6ba35a7c3d3e2f05ce7750bd7b76ba456616dc38cec147840a905e"} Jan 29 17:09:14 crc kubenswrapper[4886]: I0129 17:09:14.528240 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"63670887-1250-42df-a728-315414be9901","Type":"ContainerStarted","Data":"54233804a9ed5dc337d2e33b8c617c4a33e85a8e6af923aaf251e6cf9186b374"} Jan 29 17:09:14 crc kubenswrapper[4886]: I0129 17:09:14.530551 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3441bcd4-bf8b-406f-b3f5-1c723908bdc4","Type":"ContainerStarted","Data":"95891069401cb7e43c836c472c728a63f5e1133c6a2287df2be68780c76d5016"} Jan 29 17:09:14 crc kubenswrapper[4886]: I0129 17:09:14.541527 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" event={"ID":"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1","Type":"ContainerStarted","Data":"f636861581833a86368762de32a4ca62df7734738d06a2800f3b6b0ee4fb4aa1"} Jan 29 17:09:14 crc kubenswrapper[4886]: I0129 17:09:14.543719 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11","Type":"ContainerStarted","Data":"b9417b27c0621c2b043b290e7d29fbfb8ed923b29824c45f4941d5924a3fcf00"} Jan 29 17:09:14 crc kubenswrapper[4886]: I0129 17:09:14.554702 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-tqcf4" podStartSLOduration=2.554683149 podStartE2EDuration="2.554683149s" podCreationTimestamp="2026-01-29 17:09:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:09:14.546492658 +0000 UTC m=+2837.455211940" watchObservedRunningTime="2026-01-29 17:09:14.554683149 +0000 UTC m=+2837.463402421" Jan 29 17:09:15 crc kubenswrapper[4886]: I0129 17:09:15.107206 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fznz7"] Jan 29 17:09:15 crc kubenswrapper[4886]: I0129 17:09:15.111738 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fznz7" Jan 29 17:09:15 crc kubenswrapper[4886]: I0129 17:09:15.115155 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 17:09:15 crc kubenswrapper[4886]: I0129 17:09:15.115621 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 29 17:09:15 crc kubenswrapper[4886]: I0129 17:09:15.124135 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fznz7"] Jan 29 17:09:15 crc kubenswrapper[4886]: I0129 17:09:15.172707 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n78gf\" (UniqueName: \"kubernetes.io/projected/a88a08b7-d54a-4414-b7f6-b490949d6b70-kube-api-access-n78gf\") pod \"nova-cell1-conductor-db-sync-fznz7\" (UID: \"a88a08b7-d54a-4414-b7f6-b490949d6b70\") " pod="openstack/nova-cell1-conductor-db-sync-fznz7" Jan 29 17:09:15 crc kubenswrapper[4886]: I0129 17:09:15.173107 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a88a08b7-d54a-4414-b7f6-b490949d6b70-scripts\") pod \"nova-cell1-conductor-db-sync-fznz7\" (UID: \"a88a08b7-d54a-4414-b7f6-b490949d6b70\") " pod="openstack/nova-cell1-conductor-db-sync-fznz7" Jan 29 17:09:15 crc kubenswrapper[4886]: I0129 17:09:15.173333 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a88a08b7-d54a-4414-b7f6-b490949d6b70-config-data\") pod \"nova-cell1-conductor-db-sync-fznz7\" (UID: \"a88a08b7-d54a-4414-b7f6-b490949d6b70\") " pod="openstack/nova-cell1-conductor-db-sync-fznz7" Jan 29 17:09:15 crc kubenswrapper[4886]: I0129 17:09:15.173509 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a88a08b7-d54a-4414-b7f6-b490949d6b70-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fznz7\" (UID: \"a88a08b7-d54a-4414-b7f6-b490949d6b70\") " pod="openstack/nova-cell1-conductor-db-sync-fznz7" Jan 29 17:09:15 crc kubenswrapper[4886]: I0129 17:09:15.275669 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a88a08b7-d54a-4414-b7f6-b490949d6b70-config-data\") pod \"nova-cell1-conductor-db-sync-fznz7\" (UID: \"a88a08b7-d54a-4414-b7f6-b490949d6b70\") " pod="openstack/nova-cell1-conductor-db-sync-fznz7" Jan 29 17:09:15 crc kubenswrapper[4886]: I0129 17:09:15.275814 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a88a08b7-d54a-4414-b7f6-b490949d6b70-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fznz7\" (UID: \"a88a08b7-d54a-4414-b7f6-b490949d6b70\") " pod="openstack/nova-cell1-conductor-db-sync-fznz7" Jan 29 17:09:15 crc kubenswrapper[4886]: I0129 17:09:15.275860 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n78gf\" (UniqueName: \"kubernetes.io/projected/a88a08b7-d54a-4414-b7f6-b490949d6b70-kube-api-access-n78gf\") pod \"nova-cell1-conductor-db-sync-fznz7\" (UID: \"a88a08b7-d54a-4414-b7f6-b490949d6b70\") " pod="openstack/nova-cell1-conductor-db-sync-fznz7" Jan 29 17:09:15 crc kubenswrapper[4886]: I0129 17:09:15.276018 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a88a08b7-d54a-4414-b7f6-b490949d6b70-scripts\") pod \"nova-cell1-conductor-db-sync-fznz7\" (UID: \"a88a08b7-d54a-4414-b7f6-b490949d6b70\") " pod="openstack/nova-cell1-conductor-db-sync-fznz7" Jan 29 17:09:15 crc kubenswrapper[4886]: I0129 17:09:15.283102 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a88a08b7-d54a-4414-b7f6-b490949d6b70-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fznz7\" (UID: \"a88a08b7-d54a-4414-b7f6-b490949d6b70\") " pod="openstack/nova-cell1-conductor-db-sync-fznz7" Jan 29 17:09:15 crc kubenswrapper[4886]: I0129 17:09:15.284480 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a88a08b7-d54a-4414-b7f6-b490949d6b70-config-data\") pod \"nova-cell1-conductor-db-sync-fznz7\" (UID: \"a88a08b7-d54a-4414-b7f6-b490949d6b70\") " pod="openstack/nova-cell1-conductor-db-sync-fznz7" Jan 29 17:09:15 crc kubenswrapper[4886]: I0129 17:09:15.286017 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a88a08b7-d54a-4414-b7f6-b490949d6b70-scripts\") pod \"nova-cell1-conductor-db-sync-fznz7\" (UID: \"a88a08b7-d54a-4414-b7f6-b490949d6b70\") " pod="openstack/nova-cell1-conductor-db-sync-fznz7" Jan 29 17:09:15 crc kubenswrapper[4886]: I0129 17:09:15.336119 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n78gf\" (UniqueName: \"kubernetes.io/projected/a88a08b7-d54a-4414-b7f6-b490949d6b70-kube-api-access-n78gf\") pod \"nova-cell1-conductor-db-sync-fznz7\" (UID: \"a88a08b7-d54a-4414-b7f6-b490949d6b70\") " pod="openstack/nova-cell1-conductor-db-sync-fznz7" Jan 29 17:09:15 crc kubenswrapper[4886]: I0129 17:09:15.440836 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fznz7" Jan 29 17:09:15 crc kubenswrapper[4886]: I0129 17:09:15.558177 4886 generic.go:334] "Generic (PLEG): container finished" podID="8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1" containerID="8bfd8a8fe8f520c0bdd3a5164fe133a10f3e76f19d1c34103c42b1d9ab4fdfeb" exitCode=0 Jan 29 17:09:15 crc kubenswrapper[4886]: I0129 17:09:15.560436 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" event={"ID":"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1","Type":"ContainerDied","Data":"8bfd8a8fe8f520c0bdd3a5164fe133a10f3e76f19d1c34103c42b1d9ab4fdfeb"} Jan 29 17:09:15 crc kubenswrapper[4886]: I0129 17:09:15.703402 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:09:15 crc kubenswrapper[4886]: I0129 17:09:15.703980 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e5fe8f3b-ae29-4a3c-be7a-a645f94d226c" containerName="ceilometer-central-agent" containerID="cri-o://f28b9a9b2e33861b2b8937e8a0acf07992031f2291a0da6c8fc53223704d8f50" gracePeriod=30 Jan 29 17:09:15 crc kubenswrapper[4886]: I0129 17:09:15.704607 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e5fe8f3b-ae29-4a3c-be7a-a645f94d226c" containerName="proxy-httpd" containerID="cri-o://b8916a65aaeb4f4e843c5fba061a08311e52e99052d791e323ff6941a73b7589" gracePeriod=30 Jan 29 17:09:15 crc kubenswrapper[4886]: I0129 17:09:15.704670 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e5fe8f3b-ae29-4a3c-be7a-a645f94d226c" containerName="sg-core" containerID="cri-o://1d04206c0d41b909492932943b574fcef26ed1b2dfcf90d669a67515dcaabab7" gracePeriod=30 Jan 29 17:09:15 crc kubenswrapper[4886]: I0129 17:09:15.704749 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e5fe8f3b-ae29-4a3c-be7a-a645f94d226c" containerName="ceilometer-notification-agent" containerID="cri-o://8ed383dcd150e84a715deaf0b080e1c2f8bb3800fd02ff47edc2c3516be536cf" gracePeriod=30 Jan 29 17:09:16 crc kubenswrapper[4886]: I0129 17:09:16.108565 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fznz7"] Jan 29 17:09:16 crc kubenswrapper[4886]: W0129 17:09:16.115946 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda88a08b7_d54a_4414_b7f6_b490949d6b70.slice/crio-0f300c9b5b26753aaff19219c045a650f2a2a1dbd8aa16dd9736b14b2cbcde2c WatchSource:0}: Error finding container 0f300c9b5b26753aaff19219c045a650f2a2a1dbd8aa16dd9736b14b2cbcde2c: Status 404 returned error can't find the container with id 0f300c9b5b26753aaff19219c045a650f2a2a1dbd8aa16dd9736b14b2cbcde2c Jan 29 17:09:16 crc kubenswrapper[4886]: I0129 17:09:16.421731 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 17:09:16 crc kubenswrapper[4886]: I0129 17:09:16.442125 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 17:09:16 crc kubenswrapper[4886]: I0129 17:09:16.579832 4886 generic.go:334] "Generic (PLEG): container finished" podID="e5fe8f3b-ae29-4a3c-be7a-a645f94d226c" containerID="b8916a65aaeb4f4e843c5fba061a08311e52e99052d791e323ff6941a73b7589" exitCode=0 Jan 29 17:09:16 crc kubenswrapper[4886]: I0129 17:09:16.579889 4886 generic.go:334] 
"Generic (PLEG): container finished" podID="e5fe8f3b-ae29-4a3c-be7a-a645f94d226c" containerID="1d04206c0d41b909492932943b574fcef26ed1b2dfcf90d669a67515dcaabab7" exitCode=2 Jan 29 17:09:16 crc kubenswrapper[4886]: I0129 17:09:16.579903 4886 generic.go:334] "Generic (PLEG): container finished" podID="e5fe8f3b-ae29-4a3c-be7a-a645f94d226c" containerID="8ed383dcd150e84a715deaf0b080e1c2f8bb3800fd02ff47edc2c3516be536cf" exitCode=0 Jan 29 17:09:16 crc kubenswrapper[4886]: I0129 17:09:16.579972 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c","Type":"ContainerDied","Data":"b8916a65aaeb4f4e843c5fba061a08311e52e99052d791e323ff6941a73b7589"} Jan 29 17:09:16 crc kubenswrapper[4886]: I0129 17:09:16.580005 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c","Type":"ContainerDied","Data":"1d04206c0d41b909492932943b574fcef26ed1b2dfcf90d669a67515dcaabab7"} Jan 29 17:09:16 crc kubenswrapper[4886]: I0129 17:09:16.580020 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c","Type":"ContainerDied","Data":"8ed383dcd150e84a715deaf0b080e1c2f8bb3800fd02ff47edc2c3516be536cf"} Jan 29 17:09:16 crc kubenswrapper[4886]: I0129 17:09:16.581819 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" event={"ID":"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1","Type":"ContainerStarted","Data":"18dccc69ea12ffd53b4d4c8e312d9e5ee415348aafbce21b941019b15077a6b6"} Jan 29 17:09:16 crc kubenswrapper[4886]: I0129 17:09:16.581947 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" Jan 29 17:09:16 crc kubenswrapper[4886]: I0129 17:09:16.583324 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fznz7" event={"ID":"a88a08b7-d54a-4414-b7f6-b490949d6b70","Type":"ContainerStarted","Data":"b0c7be4a8a6f220b0bc62ecd7ce7d07cb8b17e5644962c70a9a466af1717c6ce"} Jan 29 17:09:16 crc kubenswrapper[4886]: I0129 17:09:16.583475 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fznz7" event={"ID":"a88a08b7-d54a-4414-b7f6-b490949d6b70","Type":"ContainerStarted","Data":"0f300c9b5b26753aaff19219c045a650f2a2a1dbd8aa16dd9736b14b2cbcde2c"} Jan 29 17:09:16 crc kubenswrapper[4886]: I0129 17:09:16.608222 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" podStartSLOduration=4.6082033970000005 podStartE2EDuration="4.608203397s" podCreationTimestamp="2026-01-29 17:09:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:09:16.603805133 +0000 UTC m=+2839.512524405" watchObservedRunningTime="2026-01-29 17:09:16.608203397 +0000 UTC m=+2839.516922669" Jan 29 17:09:16 crc kubenswrapper[4886]: I0129 17:09:16.651770 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-fznz7" podStartSLOduration=1.651750424 podStartE2EDuration="1.651750424s" podCreationTimestamp="2026-01-29 17:09:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:09:16.635740903 +0000 UTC m=+2839.544460175" watchObservedRunningTime="2026-01-29 
17:09:16.651750424 +0000 UTC m=+2839.560469696" Jan 29 17:09:19 crc kubenswrapper[4886]: I0129 17:09:19.680518 4886 generic.go:334] "Generic (PLEG): container finished" podID="e5fe8f3b-ae29-4a3c-be7a-a645f94d226c" containerID="f28b9a9b2e33861b2b8937e8a0acf07992031f2291a0da6c8fc53223704d8f50" exitCode=0 Jan 29 17:09:19 crc kubenswrapper[4886]: I0129 17:09:19.681111 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c","Type":"ContainerDied","Data":"f28b9a9b2e33861b2b8937e8a0acf07992031f2291a0da6c8fc53223704d8f50"} Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.010180 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.045110 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-sg-core-conf-yaml\") pod \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\" (UID: \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\") " Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.045236 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk52z\" (UniqueName: \"kubernetes.io/projected/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-kube-api-access-fk52z\") pod \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\" (UID: \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\") " Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.045268 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-run-httpd\") pod \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\" (UID: \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\") " Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.045287 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-config-data\") pod \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\" (UID: \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\") " Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.045359 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-combined-ca-bundle\") pod \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\" (UID: \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\") " Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.045464 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-log-httpd\") pod \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\" (UID: \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\") " Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.045510 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-scripts\") pod \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\" (UID: \"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c\") " Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.047659 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e5fe8f3b-ae29-4a3c-be7a-a645f94d226c" (UID: 
"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.052015 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e5fe8f3b-ae29-4a3c-be7a-a645f94d226c" (UID: "e5fe8f3b-ae29-4a3c-be7a-a645f94d226c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.078241 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-kube-api-access-fk52z" (OuterVolumeSpecName: "kube-api-access-fk52z") pod "e5fe8f3b-ae29-4a3c-be7a-a645f94d226c" (UID: "e5fe8f3b-ae29-4a3c-be7a-a645f94d226c"). InnerVolumeSpecName "kube-api-access-fk52z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.157299 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk52z\" (UniqueName: \"kubernetes.io/projected/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-kube-api-access-fk52z\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.157359 4886 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.157371 4886 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.273155 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-scripts" (OuterVolumeSpecName: "scripts") pod "e5fe8f3b-ae29-4a3c-be7a-a645f94d226c" (UID: "e5fe8f3b-ae29-4a3c-be7a-a645f94d226c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.279005 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e5fe8f3b-ae29-4a3c-be7a-a645f94d226c" (UID: "e5fe8f3b-ae29-4a3c-be7a-a645f94d226c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.362269 4886 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.365292 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.395445 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5fe8f3b-ae29-4a3c-be7a-a645f94d226c" (UID: "e5fe8f3b-ae29-4a3c-be7a-a645f94d226c"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.457637 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-config-data" (OuterVolumeSpecName: "config-data") pod "e5fe8f3b-ae29-4a3c-be7a-a645f94d226c" (UID: "e5fe8f3b-ae29-4a3c-be7a-a645f94d226c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.468088 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.468189 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.700609 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.700787 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5fe8f3b-ae29-4a3c-be7a-a645f94d226c","Type":"ContainerDied","Data":"ee2c96cf4752f271ab59c1e5d9ef8010edcb2061ecccd36a34d602bf9c8f1068"} Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.701302 4886 scope.go:117] "RemoveContainer" containerID="b8916a65aaeb4f4e843c5fba061a08311e52e99052d791e323ff6941a73b7589" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.703902 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3441bcd4-bf8b-406f-b3f5-1c723908bdc4","Type":"ContainerStarted","Data":"8808eab58f9c8adf5605704cca70ec0bf454f6f62d9777e76ad457d3030718bd"} Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.710978 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11","Type":"ContainerStarted","Data":"c1835e2ae50e04a7c3dfeb3c6fd089c66709163b5092c57a8393b86cc24e0130"} Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.711129 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://c1835e2ae50e04a7c3dfeb3c6fd089c66709163b5092c57a8393b86cc24e0130" gracePeriod=30 Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.716602 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c24e1f4d-2c34-4496-bd90-4fe840552491","Type":"ContainerStarted","Data":"9ac610ed30cb05a5e2e84f376b3dae669cc45f85e6a0aacf8442be252f9695ce"} Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.716655 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c24e1f4d-2c34-4496-bd90-4fe840552491","Type":"ContainerStarted","Data":"b24f4f5a92565d88d3fd3da1badf8b5f1cb84c27bbc9afb1415ec3f58dd94565"} Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.725761 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"63670887-1250-42df-a728-315414be9901","Type":"ContainerStarted","Data":"2706075df7ed398bfa86a5019c0c0b891534965545aed4044f6858df83babfa9"} Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.725820 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"63670887-1250-42df-a728-315414be9901","Type":"ContainerStarted","Data":"3a64bd79066ba13789ce6be118a26c29652e1e5c788ad39a1b41f13dad0dd1c1"} Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.725987 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="63670887-1250-42df-a728-315414be9901" containerName="nova-metadata-log" containerID="cri-o://3a64bd79066ba13789ce6be118a26c29652e1e5c788ad39a1b41f13dad0dd1c1" gracePeriod=30 Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.726252 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="63670887-1250-42df-a728-315414be9901" containerName="nova-metadata-metadata" containerID="cri-o://2706075df7ed398bfa86a5019c0c0b891534965545aed4044f6858df83babfa9" gracePeriod=30 Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.731446 4886 scope.go:117] "RemoveContainer" containerID="1d04206c0d41b909492932943b574fcef26ed1b2dfcf90d669a67515dcaabab7" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.747727 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.384371438 podStartE2EDuration="8.747689038s" podCreationTimestamp="2026-01-29 17:09:12 +0000 UTC" firstStartedPulling="2026-01-29 17:09:14.125761628 +0000 UTC m=+2837.034480900" lastFinishedPulling="2026-01-29 17:09:19.489079228 +0000 UTC m=+2842.397798500" observedRunningTime="2026-01-29 17:09:20.72644403 +0000 UTC m=+2843.635163332" watchObservedRunningTime="2026-01-29 17:09:20.747689038 +0000 UTC m=+2843.656408310" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.763036 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-6zh6p"] Jan 29 17:09:20 crc kubenswrapper[4886]: E0129 17:09:20.763479 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5fe8f3b-ae29-4a3c-be7a-a645f94d226c" containerName="ceilometer-notification-agent" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.763498 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5fe8f3b-ae29-4a3c-be7a-a645f94d226c" containerName="ceilometer-notification-agent" Jan 29 17:09:20 crc kubenswrapper[4886]: E0129 17:09:20.763525 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5fe8f3b-ae29-4a3c-be7a-a645f94d226c" containerName="sg-core" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.763533 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5fe8f3b-ae29-4a3c-be7a-a645f94d226c" containerName="sg-core" Jan 29 17:09:20 crc kubenswrapper[4886]: E0129 17:09:20.763559 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5fe8f3b-ae29-4a3c-be7a-a645f94d226c" containerName="ceilometer-central-agent" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.763567 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5fe8f3b-ae29-4a3c-be7a-a645f94d226c" containerName="ceilometer-central-agent" Jan 29 17:09:20 crc kubenswrapper[4886]: E0129 17:09:20.763595 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5fe8f3b-ae29-4a3c-be7a-a645f94d226c" containerName="proxy-httpd" Jan 29 17:09:20 crc 
kubenswrapper[4886]: I0129 17:09:20.763601 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5fe8f3b-ae29-4a3c-be7a-a645f94d226c" containerName="proxy-httpd" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.763908 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5fe8f3b-ae29-4a3c-be7a-a645f94d226c" containerName="ceilometer-notification-agent" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.763928 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5fe8f3b-ae29-4a3c-be7a-a645f94d226c" containerName="sg-core" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.763940 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5fe8f3b-ae29-4a3c-be7a-a645f94d226c" containerName="ceilometer-central-agent" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.763958 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5fe8f3b-ae29-4a3c-be7a-a645f94d226c" containerName="proxy-httpd" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.765020 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-6zh6p" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.807209 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-6zh6p"] Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.822773 4886 scope.go:117] "RemoveContainer" containerID="8ed383dcd150e84a715deaf0b080e1c2f8bb3800fd02ff47edc2c3516be536cf" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.829870 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.309337944 podStartE2EDuration="8.829849272s" podCreationTimestamp="2026-01-29 17:09:12 +0000 UTC" firstStartedPulling="2026-01-29 17:09:13.967029437 +0000 UTC m=+2836.875748709" lastFinishedPulling="2026-01-29 17:09:19.487540765 +0000 UTC m=+2842.396260037" observedRunningTime="2026-01-29 17:09:20.751748883 +0000 UTC m=+2843.660468155" watchObservedRunningTime="2026-01-29 17:09:20.829849272 +0000 UTC m=+2843.738568544" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.849812 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.329726759 podStartE2EDuration="8.849791324s" podCreationTimestamp="2026-01-29 17:09:12 +0000 UTC" firstStartedPulling="2026-01-29 17:09:13.966091071 +0000 UTC m=+2836.874810343" lastFinishedPulling="2026-01-29 17:09:19.486155636 +0000 UTC m=+2842.394874908" observedRunningTime="2026-01-29 17:09:20.786768399 +0000 UTC m=+2843.695487681" watchObservedRunningTime="2026-01-29 17:09:20.849791324 +0000 UTC m=+2843.758510596" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.880027 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/323a490d-33e2-4411-8a77-c578f409ba28-operator-scripts\") pod \"aodh-db-create-6zh6p\" (UID: \"323a490d-33e2-4411-8a77-c578f409ba28\") " pod="openstack/aodh-db-create-6zh6p" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.880091 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mldf\" (UniqueName: \"kubernetes.io/projected/323a490d-33e2-4411-8a77-c578f409ba28-kube-api-access-5mldf\") pod \"aodh-db-create-6zh6p\" (UID: \"323a490d-33e2-4411-8a77-c578f409ba28\") " pod="openstack/aodh-db-create-6zh6p" Jan 29 17:09:20 crc 
kubenswrapper[4886]: I0129 17:09:20.928700 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.933671 4886 scope.go:117] "RemoveContainer" containerID="f28b9a9b2e33861b2b8937e8a0acf07992031f2291a0da6c8fc53223704d8f50" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.960088 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.973554 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-60d5-account-create-update-w67hv"] Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.976943 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-60d5-account-create-update-w67hv" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.983390 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/323a490d-33e2-4411-8a77-c578f409ba28-operator-scripts\") pod \"aodh-db-create-6zh6p\" (UID: \"323a490d-33e2-4411-8a77-c578f409ba28\") " pod="openstack/aodh-db-create-6zh6p" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.983466 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mldf\" (UniqueName: \"kubernetes.io/projected/323a490d-33e2-4411-8a77-c578f409ba28-kube-api-access-5mldf\") pod \"aodh-db-create-6zh6p\" (UID: \"323a490d-33e2-4411-8a77-c578f409ba28\") " pod="openstack/aodh-db-create-6zh6p" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.984720 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/323a490d-33e2-4411-8a77-c578f409ba28-operator-scripts\") pod \"aodh-db-create-6zh6p\" (UID: \"323a490d-33e2-4411-8a77-c578f409ba28\") " pod="openstack/aodh-db-create-6zh6p" Jan 29 17:09:20 crc kubenswrapper[4886]: I0129 17:09:20.994833 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.000781 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-60d5-account-create-update-w67hv"] Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.005870 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.854569901 podStartE2EDuration="9.00585373s" podCreationTimestamp="2026-01-29 17:09:12 +0000 UTC" firstStartedPulling="2026-01-29 17:09:14.33957675 +0000 UTC m=+2837.248296022" lastFinishedPulling="2026-01-29 17:09:19.490860579 +0000 UTC m=+2842.399579851" observedRunningTime="2026-01-29 17:09:20.853581191 +0000 UTC m=+2843.762300473" watchObservedRunningTime="2026-01-29 17:09:21.00585373 +0000 UTC m=+2843.914573002" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.014067 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mldf\" (UniqueName: \"kubernetes.io/projected/323a490d-33e2-4411-8a77-c578f409ba28-kube-api-access-5mldf\") pod \"aodh-db-create-6zh6p\" (UID: \"323a490d-33e2-4411-8a77-c578f409ba28\") " pod="openstack/aodh-db-create-6zh6p" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.045384 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.048299 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.057644 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.057871 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.058634 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.086355 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/295921c4-07ca-4972-a4fa-0a64f46855ec-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"295921c4-07ca-4972-a4fa-0a64f46855ec\") " pod="openstack/ceilometer-0" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.086432 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec6f2462-b78d-4619-9704-5cc67ae60974-operator-scripts\") pod \"aodh-60d5-account-create-update-w67hv\" (UID: \"ec6f2462-b78d-4619-9704-5cc67ae60974\") " pod="openstack/aodh-60d5-account-create-update-w67hv" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.086483 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/295921c4-07ca-4972-a4fa-0a64f46855ec-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"295921c4-07ca-4972-a4fa-0a64f46855ec\") " pod="openstack/ceilometer-0" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.086507 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/295921c4-07ca-4972-a4fa-0a64f46855ec-log-httpd\") pod \"ceilometer-0\" (UID: \"295921c4-07ca-4972-a4fa-0a64f46855ec\") " pod="openstack/ceilometer-0" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.086605 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/295921c4-07ca-4972-a4fa-0a64f46855ec-run-httpd\") pod \"ceilometer-0\" (UID: \"295921c4-07ca-4972-a4fa-0a64f46855ec\") " pod="openstack/ceilometer-0" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.086621 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbzsd\" (UniqueName: \"kubernetes.io/projected/ec6f2462-b78d-4619-9704-5cc67ae60974-kube-api-access-sbzsd\") pod \"aodh-60d5-account-create-update-w67hv\" (UID: \"ec6f2462-b78d-4619-9704-5cc67ae60974\") " pod="openstack/aodh-60d5-account-create-update-w67hv" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.086657 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/295921c4-07ca-4972-a4fa-0a64f46855ec-scripts\") pod \"ceilometer-0\" (UID: \"295921c4-07ca-4972-a4fa-0a64f46855ec\") " pod="openstack/ceilometer-0" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.086719 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjk8j\" (UniqueName: \"kubernetes.io/projected/295921c4-07ca-4972-a4fa-0a64f46855ec-kube-api-access-wjk8j\") pod 
\"ceilometer-0\" (UID: \"295921c4-07ca-4972-a4fa-0a64f46855ec\") " pod="openstack/ceilometer-0" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.086737 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/295921c4-07ca-4972-a4fa-0a64f46855ec-config-data\") pod \"ceilometer-0\" (UID: \"295921c4-07ca-4972-a4fa-0a64f46855ec\") " pod="openstack/ceilometer-0" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.115015 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-6zh6p" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.205115 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjk8j\" (UniqueName: \"kubernetes.io/projected/295921c4-07ca-4972-a4fa-0a64f46855ec-kube-api-access-wjk8j\") pod \"ceilometer-0\" (UID: \"295921c4-07ca-4972-a4fa-0a64f46855ec\") " pod="openstack/ceilometer-0" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.205173 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/295921c4-07ca-4972-a4fa-0a64f46855ec-config-data\") pod \"ceilometer-0\" (UID: \"295921c4-07ca-4972-a4fa-0a64f46855ec\") " pod="openstack/ceilometer-0" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.205245 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/295921c4-07ca-4972-a4fa-0a64f46855ec-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"295921c4-07ca-4972-a4fa-0a64f46855ec\") " pod="openstack/ceilometer-0" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.205311 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec6f2462-b78d-4619-9704-5cc67ae60974-operator-scripts\") pod \"aodh-60d5-account-create-update-w67hv\" (UID: \"ec6f2462-b78d-4619-9704-5cc67ae60974\") " pod="openstack/aodh-60d5-account-create-update-w67hv" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.205386 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/295921c4-07ca-4972-a4fa-0a64f46855ec-log-httpd\") pod \"ceilometer-0\" (UID: \"295921c4-07ca-4972-a4fa-0a64f46855ec\") " pod="openstack/ceilometer-0" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.205411 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/295921c4-07ca-4972-a4fa-0a64f46855ec-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"295921c4-07ca-4972-a4fa-0a64f46855ec\") " pod="openstack/ceilometer-0" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.205558 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/295921c4-07ca-4972-a4fa-0a64f46855ec-run-httpd\") pod \"ceilometer-0\" (UID: \"295921c4-07ca-4972-a4fa-0a64f46855ec\") " pod="openstack/ceilometer-0" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.205585 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbzsd\" (UniqueName: \"kubernetes.io/projected/ec6f2462-b78d-4619-9704-5cc67ae60974-kube-api-access-sbzsd\") pod \"aodh-60d5-account-create-update-w67hv\" (UID: \"ec6f2462-b78d-4619-9704-5cc67ae60974\") " 
pod="openstack/aodh-60d5-account-create-update-w67hv" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.205634 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/295921c4-07ca-4972-a4fa-0a64f46855ec-scripts\") pod \"ceilometer-0\" (UID: \"295921c4-07ca-4972-a4fa-0a64f46855ec\") " pod="openstack/ceilometer-0" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.206510 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec6f2462-b78d-4619-9704-5cc67ae60974-operator-scripts\") pod \"aodh-60d5-account-create-update-w67hv\" (UID: \"ec6f2462-b78d-4619-9704-5cc67ae60974\") " pod="openstack/aodh-60d5-account-create-update-w67hv" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.210846 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/295921c4-07ca-4972-a4fa-0a64f46855ec-run-httpd\") pod \"ceilometer-0\" (UID: \"295921c4-07ca-4972-a4fa-0a64f46855ec\") " pod="openstack/ceilometer-0" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.211059 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/295921c4-07ca-4972-a4fa-0a64f46855ec-log-httpd\") pod \"ceilometer-0\" (UID: \"295921c4-07ca-4972-a4fa-0a64f46855ec\") " pod="openstack/ceilometer-0" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.221337 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/295921c4-07ca-4972-a4fa-0a64f46855ec-config-data\") pod \"ceilometer-0\" (UID: \"295921c4-07ca-4972-a4fa-0a64f46855ec\") " pod="openstack/ceilometer-0" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.223065 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/295921c4-07ca-4972-a4fa-0a64f46855ec-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"295921c4-07ca-4972-a4fa-0a64f46855ec\") " pod="openstack/ceilometer-0" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.225410 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/295921c4-07ca-4972-a4fa-0a64f46855ec-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"295921c4-07ca-4972-a4fa-0a64f46855ec\") " pod="openstack/ceilometer-0" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.226165 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/295921c4-07ca-4972-a4fa-0a64f46855ec-scripts\") pod \"ceilometer-0\" (UID: \"295921c4-07ca-4972-a4fa-0a64f46855ec\") " pod="openstack/ceilometer-0" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.226934 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjk8j\" (UniqueName: \"kubernetes.io/projected/295921c4-07ca-4972-a4fa-0a64f46855ec-kube-api-access-wjk8j\") pod \"ceilometer-0\" (UID: \"295921c4-07ca-4972-a4fa-0a64f46855ec\") " pod="openstack/ceilometer-0" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.262813 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbzsd\" (UniqueName: \"kubernetes.io/projected/ec6f2462-b78d-4619-9704-5cc67ae60974-kube-api-access-sbzsd\") pod \"aodh-60d5-account-create-update-w67hv\" (UID: 
\"ec6f2462-b78d-4619-9704-5cc67ae60974\") " pod="openstack/aodh-60d5-account-create-update-w67hv" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.297903 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-60d5-account-create-update-w67hv" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.406679 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.648539 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-6zh6p"] Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.756364 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-6zh6p" event={"ID":"323a490d-33e2-4411-8a77-c578f409ba28","Type":"ContainerStarted","Data":"e0ac0de75a6d66b5b0eab6f8b648695440128eefa4a612dc2e8eeb54837d3d6c"} Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.761945 4886 generic.go:334] "Generic (PLEG): container finished" podID="63670887-1250-42df-a728-315414be9901" containerID="3a64bd79066ba13789ce6be118a26c29652e1e5c788ad39a1b41f13dad0dd1c1" exitCode=143 Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.761995 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"63670887-1250-42df-a728-315414be9901","Type":"ContainerDied","Data":"3a64bd79066ba13789ce6be118a26c29652e1e5c788ad39a1b41f13dad0dd1c1"} Jan 29 17:09:21 crc kubenswrapper[4886]: I0129 17:09:21.845955 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-60d5-account-create-update-w67hv"] Jan 29 17:09:22 crc kubenswrapper[4886]: W0129 17:09:22.176503 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod295921c4_07ca_4972_a4fa_0a64f46855ec.slice/crio-a53c80ed86f57307186bc127fbed1c995aed2de96e312e93825a7c90882f5022 WatchSource:0}: Error finding container a53c80ed86f57307186bc127fbed1c995aed2de96e312e93825a7c90882f5022: Status 404 returned error can't find the container with id a53c80ed86f57307186bc127fbed1c995aed2de96e312e93825a7c90882f5022 Jan 29 17:09:22 crc kubenswrapper[4886]: I0129 17:09:22.177656 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:09:22 crc kubenswrapper[4886]: I0129 17:09:22.627756 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5fe8f3b-ae29-4a3c-be7a-a645f94d226c" path="/var/lib/kubelet/pods/e5fe8f3b-ae29-4a3c-be7a-a645f94d226c/volumes" Jan 29 17:09:22 crc kubenswrapper[4886]: I0129 17:09:22.780010 4886 generic.go:334] "Generic (PLEG): container finished" podID="ec6f2462-b78d-4619-9704-5cc67ae60974" containerID="94c431dc7f3dd6c3f091efc6b5f4191b950083388e1ef0390fd70fcd7a85128c" exitCode=0 Jan 29 17:09:22 crc kubenswrapper[4886]: I0129 17:09:22.780087 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-60d5-account-create-update-w67hv" event={"ID":"ec6f2462-b78d-4619-9704-5cc67ae60974","Type":"ContainerDied","Data":"94c431dc7f3dd6c3f091efc6b5f4191b950083388e1ef0390fd70fcd7a85128c"} Jan 29 17:09:22 crc kubenswrapper[4886]: I0129 17:09:22.780119 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-60d5-account-create-update-w67hv" event={"ID":"ec6f2462-b78d-4619-9704-5cc67ae60974","Type":"ContainerStarted","Data":"5d3677942b9ad8cac08ad6a8040413f4a4dafcf1a3ca405fc940d518718d37c9"} Jan 29 17:09:22 crc kubenswrapper[4886]: I0129 17:09:22.784374 
4886 generic.go:334] "Generic (PLEG): container finished" podID="323a490d-33e2-4411-8a77-c578f409ba28" containerID="2e1c0eadae73024c2cb0f70a58a6f4f7d1a81518c1e179c7358b1ee70d254152" exitCode=0 Jan 29 17:09:22 crc kubenswrapper[4886]: I0129 17:09:22.784520 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-6zh6p" event={"ID":"323a490d-33e2-4411-8a77-c578f409ba28","Type":"ContainerDied","Data":"2e1c0eadae73024c2cb0f70a58a6f4f7d1a81518c1e179c7358b1ee70d254152"} Jan 29 17:09:22 crc kubenswrapper[4886]: I0129 17:09:22.786057 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"295921c4-07ca-4972-a4fa-0a64f46855ec","Type":"ContainerStarted","Data":"a53c80ed86f57307186bc127fbed1c995aed2de96e312e93825a7c90882f5022"} Jan 29 17:09:23 crc kubenswrapper[4886]: I0129 17:09:23.032896 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 17:09:23 crc kubenswrapper[4886]: I0129 17:09:23.054528 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 17:09:23 crc kubenswrapper[4886]: I0129 17:09:23.109139 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 17:09:23 crc kubenswrapper[4886]: I0129 17:09:23.109194 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 17:09:23 crc kubenswrapper[4886]: I0129 17:09:23.126514 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 17:09:23 crc kubenswrapper[4886]: I0129 17:09:23.126549 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 17:09:23 crc kubenswrapper[4886]: I0129 17:09:23.167574 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 17:09:23 crc kubenswrapper[4886]: I0129 17:09:23.436486 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" Jan 29 17:09:23 crc kubenswrapper[4886]: I0129 17:09:23.444113 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:23 crc kubenswrapper[4886]: I0129 17:09:23.577873 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-btn45"] Jan 29 17:09:23 crc kubenswrapper[4886]: I0129 17:09:23.578482 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-btn45" podUID="da76d93d-7c2d-485e-b5e0-229f4254d74b" containerName="dnsmasq-dns" containerID="cri-o://d9ab37d44f372064ee89522913b27477d9c2a6f3f0efeec33809e585d943fe38" gracePeriod=10 Jan 29 17:09:23 crc kubenswrapper[4886]: I0129 17:09:23.820272 4886 generic.go:334] "Generic (PLEG): container finished" podID="da76d93d-7c2d-485e-b5e0-229f4254d74b" containerID="d9ab37d44f372064ee89522913b27477d9c2a6f3f0efeec33809e585d943fe38" exitCode=0 Jan 29 17:09:23 crc kubenswrapper[4886]: I0129 17:09:23.820607 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-btn45" event={"ID":"da76d93d-7c2d-485e-b5e0-229f4254d74b","Type":"ContainerDied","Data":"d9ab37d44f372064ee89522913b27477d9c2a6f3f0efeec33809e585d943fe38"} Jan 29 17:09:23 crc kubenswrapper[4886]: I0129 17:09:23.823793 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"295921c4-07ca-4972-a4fa-0a64f46855ec","Type":"ContainerStarted","Data":"0b0960c021f6fe492666e7a5f8550203f34c505c88a04448efdf009572fba707"} Jan 29 17:09:23 crc kubenswrapper[4886]: I0129 17:09:23.874209 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 17:09:24 crc kubenswrapper[4886]: I0129 17:09:24.114593 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c24e1f4d-2c34-4496-bd90-4fe840552491" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.253:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 17:09:24 crc kubenswrapper[4886]: I0129 17:09:24.114592 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c24e1f4d-2c34-4496-bd90-4fe840552491" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.253:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 17:09:24 crc kubenswrapper[4886]: I0129 17:09:24.515236 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-btn45" Jan 29 17:09:24 crc kubenswrapper[4886]: I0129 17:09:24.675067 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da76d93d-7c2d-485e-b5e0-229f4254d74b-ovsdbserver-sb\") pod \"da76d93d-7c2d-485e-b5e0-229f4254d74b\" (UID: \"da76d93d-7c2d-485e-b5e0-229f4254d74b\") " Jan 29 17:09:24 crc kubenswrapper[4886]: I0129 17:09:24.675186 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6k7c\" (UniqueName: \"kubernetes.io/projected/da76d93d-7c2d-485e-b5e0-229f4254d74b-kube-api-access-m6k7c\") pod \"da76d93d-7c2d-485e-b5e0-229f4254d74b\" (UID: \"da76d93d-7c2d-485e-b5e0-229f4254d74b\") " Jan 29 17:09:24 crc kubenswrapper[4886]: I0129 17:09:24.675464 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da76d93d-7c2d-485e-b5e0-229f4254d74b-config\") pod \"da76d93d-7c2d-485e-b5e0-229f4254d74b\" (UID: \"da76d93d-7c2d-485e-b5e0-229f4254d74b\") " Jan 29 17:09:24 crc kubenswrapper[4886]: I0129 17:09:24.675540 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da76d93d-7c2d-485e-b5e0-229f4254d74b-dns-svc\") pod \"da76d93d-7c2d-485e-b5e0-229f4254d74b\" (UID: \"da76d93d-7c2d-485e-b5e0-229f4254d74b\") " Jan 29 17:09:24 crc kubenswrapper[4886]: I0129 17:09:24.675573 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da76d93d-7c2d-485e-b5e0-229f4254d74b-dns-swift-storage-0\") pod \"da76d93d-7c2d-485e-b5e0-229f4254d74b\" (UID: \"da76d93d-7c2d-485e-b5e0-229f4254d74b\") " Jan 29 17:09:24 crc kubenswrapper[4886]: I0129 17:09:24.675654 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da76d93d-7c2d-485e-b5e0-229f4254d74b-ovsdbserver-nb\") pod \"da76d93d-7c2d-485e-b5e0-229f4254d74b\" (UID: \"da76d93d-7c2d-485e-b5e0-229f4254d74b\") " Jan 29 17:09:24 crc kubenswrapper[4886]: I0129 17:09:24.752691 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/da76d93d-7c2d-485e-b5e0-229f4254d74b-kube-api-access-m6k7c" (OuterVolumeSpecName: "kube-api-access-m6k7c") pod "da76d93d-7c2d-485e-b5e0-229f4254d74b" (UID: "da76d93d-7c2d-485e-b5e0-229f4254d74b"). InnerVolumeSpecName "kube-api-access-m6k7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:09:24 crc kubenswrapper[4886]: I0129 17:09:24.822887 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6k7c\" (UniqueName: \"kubernetes.io/projected/da76d93d-7c2d-485e-b5e0-229f4254d74b-kube-api-access-m6k7c\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:24 crc kubenswrapper[4886]: I0129 17:09:24.891799 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-btn45" Jan 29 17:09:24 crc kubenswrapper[4886]: I0129 17:09:24.895074 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da76d93d-7c2d-485e-b5e0-229f4254d74b-config" (OuterVolumeSpecName: "config") pod "da76d93d-7c2d-485e-b5e0-229f4254d74b" (UID: "da76d93d-7c2d-485e-b5e0-229f4254d74b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:09:24 crc kubenswrapper[4886]: I0129 17:09:24.906143 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"295921c4-07ca-4972-a4fa-0a64f46855ec","Type":"ContainerStarted","Data":"35e24ed99f8fd2890904f1ca37992a754b300543953f2f3061639a8631f92529"} Jan 29 17:09:24 crc kubenswrapper[4886]: I0129 17:09:24.906854 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-btn45" event={"ID":"da76d93d-7c2d-485e-b5e0-229f4254d74b","Type":"ContainerDied","Data":"bfc495e69c05d32911e1c19e2fff095c3d4fca06c566554a8f30f63272e3f284"} Jan 29 17:09:24 crc kubenswrapper[4886]: I0129 17:09:24.906905 4886 scope.go:117] "RemoveContainer" containerID="d9ab37d44f372064ee89522913b27477d9c2a6f3f0efeec33809e585d943fe38" Jan 29 17:09:24 crc kubenswrapper[4886]: I0129 17:09:24.915816 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-60d5-account-create-update-w67hv" event={"ID":"ec6f2462-b78d-4619-9704-5cc67ae60974","Type":"ContainerDied","Data":"5d3677942b9ad8cac08ad6a8040413f4a4dafcf1a3ca405fc940d518718d37c9"} Jan 29 17:09:24 crc kubenswrapper[4886]: I0129 17:09:24.915874 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d3677942b9ad8cac08ad6a8040413f4a4dafcf1a3ca405fc940d518718d37c9" Jan 29 17:09:24 crc kubenswrapper[4886]: I0129 17:09:24.930622 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da76d93d-7c2d-485e-b5e0-229f4254d74b-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:24 crc kubenswrapper[4886]: I0129 17:09:24.939855 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da76d93d-7c2d-485e-b5e0-229f4254d74b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "da76d93d-7c2d-485e-b5e0-229f4254d74b" (UID: "da76d93d-7c2d-485e-b5e0-229f4254d74b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:09:24 crc kubenswrapper[4886]: I0129 17:09:24.944311 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da76d93d-7c2d-485e-b5e0-229f4254d74b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "da76d93d-7c2d-485e-b5e0-229f4254d74b" (UID: "da76d93d-7c2d-485e-b5e0-229f4254d74b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:09:24 crc kubenswrapper[4886]: I0129 17:09:24.958981 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-60d5-account-create-update-w67hv" Jan 29 17:09:24 crc kubenswrapper[4886]: I0129 17:09:24.965747 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-6zh6p" Jan 29 17:09:24 crc kubenswrapper[4886]: I0129 17:09:24.965842 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da76d93d-7c2d-485e-b5e0-229f4254d74b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "da76d93d-7c2d-485e-b5e0-229f4254d74b" (UID: "da76d93d-7c2d-485e-b5e0-229f4254d74b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:09:24 crc kubenswrapper[4886]: I0129 17:09:24.965956 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da76d93d-7c2d-485e-b5e0-229f4254d74b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "da76d93d-7c2d-485e-b5e0-229f4254d74b" (UID: "da76d93d-7c2d-485e-b5e0-229f4254d74b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:09:24 crc kubenswrapper[4886]: I0129 17:09:24.973117 4886 scope.go:117] "RemoveContainer" containerID="aecb755c349be6f445700545d32b2d2a1cceeb8e44ce0b32e7f93655d8a60679" Jan 29 17:09:25 crc kubenswrapper[4886]: I0129 17:09:25.034375 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da76d93d-7c2d-485e-b5e0-229f4254d74b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:25 crc kubenswrapper[4886]: I0129 17:09:25.034451 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da76d93d-7c2d-485e-b5e0-229f4254d74b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:25 crc kubenswrapper[4886]: I0129 17:09:25.034465 4886 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da76d93d-7c2d-485e-b5e0-229f4254d74b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:25 crc kubenswrapper[4886]: I0129 17:09:25.034505 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da76d93d-7c2d-485e-b5e0-229f4254d74b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:25 crc kubenswrapper[4886]: I0129 17:09:25.137025 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mldf\" (UniqueName: \"kubernetes.io/projected/323a490d-33e2-4411-8a77-c578f409ba28-kube-api-access-5mldf\") pod \"323a490d-33e2-4411-8a77-c578f409ba28\" (UID: \"323a490d-33e2-4411-8a77-c578f409ba28\") " Jan 29 17:09:25 crc kubenswrapper[4886]: I0129 17:09:25.137089 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-sbzsd\" (UniqueName: \"kubernetes.io/projected/ec6f2462-b78d-4619-9704-5cc67ae60974-kube-api-access-sbzsd\") pod \"ec6f2462-b78d-4619-9704-5cc67ae60974\" (UID: \"ec6f2462-b78d-4619-9704-5cc67ae60974\") " Jan 29 17:09:25 crc kubenswrapper[4886]: I0129 17:09:25.137262 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec6f2462-b78d-4619-9704-5cc67ae60974-operator-scripts\") pod \"ec6f2462-b78d-4619-9704-5cc67ae60974\" (UID: \"ec6f2462-b78d-4619-9704-5cc67ae60974\") " Jan 29 17:09:25 crc kubenswrapper[4886]: I0129 17:09:25.137459 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/323a490d-33e2-4411-8a77-c578f409ba28-operator-scripts\") pod \"323a490d-33e2-4411-8a77-c578f409ba28\" (UID: \"323a490d-33e2-4411-8a77-c578f409ba28\") " Jan 29 17:09:25 crc kubenswrapper[4886]: I0129 17:09:25.138363 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec6f2462-b78d-4619-9704-5cc67ae60974-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec6f2462-b78d-4619-9704-5cc67ae60974" (UID: "ec6f2462-b78d-4619-9704-5cc67ae60974"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:09:25 crc kubenswrapper[4886]: I0129 17:09:25.138531 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/323a490d-33e2-4411-8a77-c578f409ba28-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "323a490d-33e2-4411-8a77-c578f409ba28" (UID: "323a490d-33e2-4411-8a77-c578f409ba28"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:09:25 crc kubenswrapper[4886]: I0129 17:09:25.141549 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec6f2462-b78d-4619-9704-5cc67ae60974-kube-api-access-sbzsd" (OuterVolumeSpecName: "kube-api-access-sbzsd") pod "ec6f2462-b78d-4619-9704-5cc67ae60974" (UID: "ec6f2462-b78d-4619-9704-5cc67ae60974"). InnerVolumeSpecName "kube-api-access-sbzsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:09:25 crc kubenswrapper[4886]: I0129 17:09:25.141728 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/323a490d-33e2-4411-8a77-c578f409ba28-kube-api-access-5mldf" (OuterVolumeSpecName: "kube-api-access-5mldf") pod "323a490d-33e2-4411-8a77-c578f409ba28" (UID: "323a490d-33e2-4411-8a77-c578f409ba28"). InnerVolumeSpecName "kube-api-access-5mldf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:09:25 crc kubenswrapper[4886]: I0129 17:09:25.237846 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-btn45"] Jan 29 17:09:25 crc kubenswrapper[4886]: I0129 17:09:25.239180 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec6f2462-b78d-4619-9704-5cc67ae60974-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:25 crc kubenswrapper[4886]: I0129 17:09:25.239206 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/323a490d-33e2-4411-8a77-c578f409ba28-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:25 crc kubenswrapper[4886]: I0129 17:09:25.239218 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mldf\" (UniqueName: \"kubernetes.io/projected/323a490d-33e2-4411-8a77-c578f409ba28-kube-api-access-5mldf\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:25 crc kubenswrapper[4886]: I0129 17:09:25.239229 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbzsd\" (UniqueName: \"kubernetes.io/projected/ec6f2462-b78d-4619-9704-5cc67ae60974-kube-api-access-sbzsd\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:25 crc kubenswrapper[4886]: I0129 17:09:25.250896 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-btn45"] Jan 29 17:09:25 crc kubenswrapper[4886]: I0129 17:09:25.936840 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"295921c4-07ca-4972-a4fa-0a64f46855ec","Type":"ContainerStarted","Data":"3856ce84dbdc829026cdc077123a144ae1db22ed2ef5daec2a2a38e79ea5fff2"} Jan 29 17:09:25 crc kubenswrapper[4886]: I0129 17:09:25.941408 4886 generic.go:334] "Generic (PLEG): container finished" podID="8cabf586-398a-45a9-80d6-2fd63d9e14e5" containerID="d6960d602147a760f370e0aaeba322f8c53999b050075e5ef6c33ecafc0b7928" exitCode=0 Jan 29 17:09:25 crc kubenswrapper[4886]: I0129 17:09:25.941500 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-tqcf4" event={"ID":"8cabf586-398a-45a9-80d6-2fd63d9e14e5","Type":"ContainerDied","Data":"d6960d602147a760f370e0aaeba322f8c53999b050075e5ef6c33ecafc0b7928"} Jan 29 17:09:25 crc kubenswrapper[4886]: I0129 17:09:25.947635 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-60d5-account-create-update-w67hv" Jan 29 17:09:25 crc kubenswrapper[4886]: I0129 17:09:25.947663 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-6zh6p" Jan 29 17:09:25 crc kubenswrapper[4886]: I0129 17:09:25.947712 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-6zh6p" event={"ID":"323a490d-33e2-4411-8a77-c578f409ba28","Type":"ContainerDied","Data":"e0ac0de75a6d66b5b0eab6f8b648695440128eefa4a612dc2e8eeb54837d3d6c"} Jan 29 17:09:25 crc kubenswrapper[4886]: I0129 17:09:25.947737 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0ac0de75a6d66b5b0eab6f8b648695440128eefa4a612dc2e8eeb54837d3d6c" Jan 29 17:09:26 crc kubenswrapper[4886]: I0129 17:09:26.629927 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da76d93d-7c2d-485e-b5e0-229f4254d74b" path="/var/lib/kubelet/pods/da76d93d-7c2d-485e-b5e0-229f4254d74b/volumes" Jan 29 17:09:27 crc kubenswrapper[4886]: I0129 17:09:27.443690 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-tqcf4" Jan 29 17:09:27 crc kubenswrapper[4886]: I0129 17:09:27.609100 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhvmq\" (UniqueName: \"kubernetes.io/projected/8cabf586-398a-45a9-80d6-2fd63d9e14e5-kube-api-access-vhvmq\") pod \"8cabf586-398a-45a9-80d6-2fd63d9e14e5\" (UID: \"8cabf586-398a-45a9-80d6-2fd63d9e14e5\") " Jan 29 17:09:27 crc kubenswrapper[4886]: I0129 17:09:27.609267 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cabf586-398a-45a9-80d6-2fd63d9e14e5-combined-ca-bundle\") pod \"8cabf586-398a-45a9-80d6-2fd63d9e14e5\" (UID: \"8cabf586-398a-45a9-80d6-2fd63d9e14e5\") " Jan 29 17:09:27 crc kubenswrapper[4886]: I0129 17:09:27.609472 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cabf586-398a-45a9-80d6-2fd63d9e14e5-scripts\") pod \"8cabf586-398a-45a9-80d6-2fd63d9e14e5\" (UID: \"8cabf586-398a-45a9-80d6-2fd63d9e14e5\") " Jan 29 17:09:27 crc kubenswrapper[4886]: I0129 17:09:27.609596 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cabf586-398a-45a9-80d6-2fd63d9e14e5-config-data\") pod \"8cabf586-398a-45a9-80d6-2fd63d9e14e5\" (UID: \"8cabf586-398a-45a9-80d6-2fd63d9e14e5\") " Jan 29 17:09:27 crc kubenswrapper[4886]: I0129 17:09:27.626076 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cabf586-398a-45a9-80d6-2fd63d9e14e5-kube-api-access-vhvmq" (OuterVolumeSpecName: "kube-api-access-vhvmq") pod "8cabf586-398a-45a9-80d6-2fd63d9e14e5" (UID: "8cabf586-398a-45a9-80d6-2fd63d9e14e5"). InnerVolumeSpecName "kube-api-access-vhvmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:09:27 crc kubenswrapper[4886]: I0129 17:09:27.632559 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cabf586-398a-45a9-80d6-2fd63d9e14e5-scripts" (OuterVolumeSpecName: "scripts") pod "8cabf586-398a-45a9-80d6-2fd63d9e14e5" (UID: "8cabf586-398a-45a9-80d6-2fd63d9e14e5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:27 crc kubenswrapper[4886]: I0129 17:09:27.668532 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cabf586-398a-45a9-80d6-2fd63d9e14e5-config-data" (OuterVolumeSpecName: "config-data") pod "8cabf586-398a-45a9-80d6-2fd63d9e14e5" (UID: "8cabf586-398a-45a9-80d6-2fd63d9e14e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:27 crc kubenswrapper[4886]: I0129 17:09:27.676510 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cabf586-398a-45a9-80d6-2fd63d9e14e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8cabf586-398a-45a9-80d6-2fd63d9e14e5" (UID: "8cabf586-398a-45a9-80d6-2fd63d9e14e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:27 crc kubenswrapper[4886]: I0129 17:09:27.712772 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cabf586-398a-45a9-80d6-2fd63d9e14e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:27 crc kubenswrapper[4886]: I0129 17:09:27.712815 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cabf586-398a-45a9-80d6-2fd63d9e14e5-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:27 crc kubenswrapper[4886]: I0129 17:09:27.712837 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cabf586-398a-45a9-80d6-2fd63d9e14e5-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:27 crc kubenswrapper[4886]: I0129 17:09:27.712850 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhvmq\" (UniqueName: \"kubernetes.io/projected/8cabf586-398a-45a9-80d6-2fd63d9e14e5-kube-api-access-vhvmq\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:27 crc kubenswrapper[4886]: I0129 17:09:27.972309 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"295921c4-07ca-4972-a4fa-0a64f46855ec","Type":"ContainerStarted","Data":"63a6dbf76c0560d2045aa913e46fcd8eb27522f3a2df8c23f4d345a42f6982ef"} Jan 29 17:09:27 crc kubenswrapper[4886]: I0129 17:09:27.972611 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 17:09:27 crc kubenswrapper[4886]: I0129 17:09:27.975554 4886 generic.go:334] "Generic (PLEG): container finished" podID="a88a08b7-d54a-4414-b7f6-b490949d6b70" containerID="b0c7be4a8a6f220b0bc62ecd7ce7d07cb8b17e5644962c70a9a466af1717c6ce" exitCode=0 Jan 29 17:09:27 crc kubenswrapper[4886]: I0129 17:09:27.975642 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fznz7" event={"ID":"a88a08b7-d54a-4414-b7f6-b490949d6b70","Type":"ContainerDied","Data":"b0c7be4a8a6f220b0bc62ecd7ce7d07cb8b17e5644962c70a9a466af1717c6ce"} Jan 29 17:09:27 crc kubenswrapper[4886]: I0129 17:09:27.978964 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-tqcf4" event={"ID":"8cabf586-398a-45a9-80d6-2fd63d9e14e5","Type":"ContainerDied","Data":"c9ea59738c6ba35a7c3d3e2f05ce7750bd7b76ba456616dc38cec147840a905e"} Jan 29 17:09:27 crc kubenswrapper[4886]: I0129 17:09:27.979001 4886 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="c9ea59738c6ba35a7c3d3e2f05ce7750bd7b76ba456616dc38cec147840a905e" Jan 29 17:09:27 crc kubenswrapper[4886]: I0129 17:09:27.979060 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-tqcf4" Jan 29 17:09:28 crc kubenswrapper[4886]: I0129 17:09:28.004673 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.217312215 podStartE2EDuration="8.004649476s" podCreationTimestamp="2026-01-29 17:09:20 +0000 UTC" firstStartedPulling="2026-01-29 17:09:22.179245188 +0000 UTC m=+2845.087964460" lastFinishedPulling="2026-01-29 17:09:26.966582449 +0000 UTC m=+2849.875301721" observedRunningTime="2026-01-29 17:09:27.992167079 +0000 UTC m=+2850.900886381" watchObservedRunningTime="2026-01-29 17:09:28.004649476 +0000 UTC m=+2850.913368758" Jan 29 17:09:28 crc kubenswrapper[4886]: I0129 17:09:28.179428 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 17:09:28 crc kubenswrapper[4886]: I0129 17:09:28.179690 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c24e1f4d-2c34-4496-bd90-4fe840552491" containerName="nova-api-log" containerID="cri-o://b24f4f5a92565d88d3fd3da1badf8b5f1cb84c27bbc9afb1415ec3f58dd94565" gracePeriod=30 Jan 29 17:09:28 crc kubenswrapper[4886]: I0129 17:09:28.179826 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c24e1f4d-2c34-4496-bd90-4fe840552491" containerName="nova-api-api" containerID="cri-o://9ac610ed30cb05a5e2e84f376b3dae669cc45f85e6a0aacf8442be252f9695ce" gracePeriod=30 Jan 29 17:09:28 crc kubenswrapper[4886]: I0129 17:09:28.216049 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 17:09:28 crc kubenswrapper[4886]: I0129 17:09:28.216580 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="3441bcd4-bf8b-406f-b3f5-1c723908bdc4" containerName="nova-scheduler-scheduler" containerID="cri-o://8808eab58f9c8adf5605704cca70ec0bf454f6f62d9777e76ad457d3030718bd" gracePeriod=30 Jan 29 17:09:28 crc kubenswrapper[4886]: I0129 17:09:28.990789 4886 generic.go:334] "Generic (PLEG): container finished" podID="c24e1f4d-2c34-4496-bd90-4fe840552491" containerID="b24f4f5a92565d88d3fd3da1badf8b5f1cb84c27bbc9afb1415ec3f58dd94565" exitCode=143 Jan 29 17:09:28 crc kubenswrapper[4886]: I0129 17:09:28.991009 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c24e1f4d-2c34-4496-bd90-4fe840552491","Type":"ContainerDied","Data":"b24f4f5a92565d88d3fd3da1badf8b5f1cb84c27bbc9afb1415ec3f58dd94565"} Jan 29 17:09:28 crc kubenswrapper[4886]: I0129 17:09:28.997715 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7756b9d78c-btn45" podUID="da76d93d-7c2d-485e-b5e0-229f4254d74b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.227:5353: i/o timeout" Jan 29 17:09:29 crc kubenswrapper[4886]: I0129 17:09:29.468812 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fznz7" Jan 29 17:09:29 crc kubenswrapper[4886]: I0129 17:09:29.654525 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n78gf\" (UniqueName: \"kubernetes.io/projected/a88a08b7-d54a-4414-b7f6-b490949d6b70-kube-api-access-n78gf\") pod \"a88a08b7-d54a-4414-b7f6-b490949d6b70\" (UID: \"a88a08b7-d54a-4414-b7f6-b490949d6b70\") " Jan 29 17:09:29 crc kubenswrapper[4886]: I0129 17:09:29.654704 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a88a08b7-d54a-4414-b7f6-b490949d6b70-config-data\") pod \"a88a08b7-d54a-4414-b7f6-b490949d6b70\" (UID: \"a88a08b7-d54a-4414-b7f6-b490949d6b70\") " Jan 29 17:09:29 crc kubenswrapper[4886]: I0129 17:09:29.655554 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a88a08b7-d54a-4414-b7f6-b490949d6b70-combined-ca-bundle\") pod \"a88a08b7-d54a-4414-b7f6-b490949d6b70\" (UID: \"a88a08b7-d54a-4414-b7f6-b490949d6b70\") " Jan 29 17:09:29 crc kubenswrapper[4886]: I0129 17:09:29.655584 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a88a08b7-d54a-4414-b7f6-b490949d6b70-scripts\") pod \"a88a08b7-d54a-4414-b7f6-b490949d6b70\" (UID: \"a88a08b7-d54a-4414-b7f6-b490949d6b70\") " Jan 29 17:09:29 crc kubenswrapper[4886]: I0129 17:09:29.661174 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a88a08b7-d54a-4414-b7f6-b490949d6b70-kube-api-access-n78gf" (OuterVolumeSpecName: "kube-api-access-n78gf") pod "a88a08b7-d54a-4414-b7f6-b490949d6b70" (UID: "a88a08b7-d54a-4414-b7f6-b490949d6b70"). InnerVolumeSpecName "kube-api-access-n78gf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:09:29 crc kubenswrapper[4886]: I0129 17:09:29.662514 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a88a08b7-d54a-4414-b7f6-b490949d6b70-scripts" (OuterVolumeSpecName: "scripts") pod "a88a08b7-d54a-4414-b7f6-b490949d6b70" (UID: "a88a08b7-d54a-4414-b7f6-b490949d6b70"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:29 crc kubenswrapper[4886]: I0129 17:09:29.689071 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a88a08b7-d54a-4414-b7f6-b490949d6b70-config-data" (OuterVolumeSpecName: "config-data") pod "a88a08b7-d54a-4414-b7f6-b490949d6b70" (UID: "a88a08b7-d54a-4414-b7f6-b490949d6b70"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:29 crc kubenswrapper[4886]: I0129 17:09:29.717537 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a88a08b7-d54a-4414-b7f6-b490949d6b70-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a88a08b7-d54a-4414-b7f6-b490949d6b70" (UID: "a88a08b7-d54a-4414-b7f6-b490949d6b70"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:29 crc kubenswrapper[4886]: I0129 17:09:29.758933 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n78gf\" (UniqueName: \"kubernetes.io/projected/a88a08b7-d54a-4414-b7f6-b490949d6b70-kube-api-access-n78gf\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:29 crc kubenswrapper[4886]: I0129 17:09:29.758969 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a88a08b7-d54a-4414-b7f6-b490949d6b70-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:29 crc kubenswrapper[4886]: I0129 17:09:29.758979 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a88a08b7-d54a-4414-b7f6-b490949d6b70-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:29 crc kubenswrapper[4886]: I0129 17:09:29.758990 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a88a08b7-d54a-4414-b7f6-b490949d6b70-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.002803 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fznz7" event={"ID":"a88a08b7-d54a-4414-b7f6-b490949d6b70","Type":"ContainerDied","Data":"0f300c9b5b26753aaff19219c045a650f2a2a1dbd8aa16dd9736b14b2cbcde2c"} Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.003145 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f300c9b5b26753aaff19219c045a650f2a2a1dbd8aa16dd9736b14b2cbcde2c" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.002856 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fznz7" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.122089 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 17:09:30 crc kubenswrapper[4886]: E0129 17:09:30.122700 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec6f2462-b78d-4619-9704-5cc67ae60974" containerName="mariadb-account-create-update" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.122720 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec6f2462-b78d-4619-9704-5cc67ae60974" containerName="mariadb-account-create-update" Jan 29 17:09:30 crc kubenswrapper[4886]: E0129 17:09:30.122735 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="323a490d-33e2-4411-8a77-c578f409ba28" containerName="mariadb-database-create" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.122744 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="323a490d-33e2-4411-8a77-c578f409ba28" containerName="mariadb-database-create" Jan 29 17:09:30 crc kubenswrapper[4886]: E0129 17:09:30.122761 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a88a08b7-d54a-4414-b7f6-b490949d6b70" containerName="nova-cell1-conductor-db-sync" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.122770 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="a88a08b7-d54a-4414-b7f6-b490949d6b70" containerName="nova-cell1-conductor-db-sync" Jan 29 17:09:30 crc kubenswrapper[4886]: E0129 17:09:30.122791 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da76d93d-7c2d-485e-b5e0-229f4254d74b" containerName="dnsmasq-dns" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.122799 4886 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="da76d93d-7c2d-485e-b5e0-229f4254d74b" containerName="dnsmasq-dns" Jan 29 17:09:30 crc kubenswrapper[4886]: E0129 17:09:30.122814 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da76d93d-7c2d-485e-b5e0-229f4254d74b" containerName="init" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.122822 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="da76d93d-7c2d-485e-b5e0-229f4254d74b" containerName="init" Jan 29 17:09:30 crc kubenswrapper[4886]: E0129 17:09:30.122883 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cabf586-398a-45a9-80d6-2fd63d9e14e5" containerName="nova-manage" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.122893 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cabf586-398a-45a9-80d6-2fd63d9e14e5" containerName="nova-manage" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.123178 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cabf586-398a-45a9-80d6-2fd63d9e14e5" containerName="nova-manage" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.123203 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="da76d93d-7c2d-485e-b5e0-229f4254d74b" containerName="dnsmasq-dns" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.123216 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="323a490d-33e2-4411-8a77-c578f409ba28" containerName="mariadb-database-create" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.123236 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec6f2462-b78d-4619-9704-5cc67ae60974" containerName="mariadb-account-create-update" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.123255 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="a88a08b7-d54a-4414-b7f6-b490949d6b70" containerName="nova-cell1-conductor-db-sync" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.124291 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.133600 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.135663 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.275685 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5cnc\" (UniqueName: \"kubernetes.io/projected/08160d2e-8072-4d08-9dd2-4b5f256b6d9d-kube-api-access-n5cnc\") pod \"nova-cell1-conductor-0\" (UID: \"08160d2e-8072-4d08-9dd2-4b5f256b6d9d\") " pod="openstack/nova-cell1-conductor-0" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.275751 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08160d2e-8072-4d08-9dd2-4b5f256b6d9d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"08160d2e-8072-4d08-9dd2-4b5f256b6d9d\") " pod="openstack/nova-cell1-conductor-0" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.275773 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08160d2e-8072-4d08-9dd2-4b5f256b6d9d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"08160d2e-8072-4d08-9dd2-4b5f256b6d9d\") " pod="openstack/nova-cell1-conductor-0" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.378030 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5cnc\" (UniqueName: \"kubernetes.io/projected/08160d2e-8072-4d08-9dd2-4b5f256b6d9d-kube-api-access-n5cnc\") pod \"nova-cell1-conductor-0\" (UID: \"08160d2e-8072-4d08-9dd2-4b5f256b6d9d\") " pod="openstack/nova-cell1-conductor-0" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.378099 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08160d2e-8072-4d08-9dd2-4b5f256b6d9d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"08160d2e-8072-4d08-9dd2-4b5f256b6d9d\") " pod="openstack/nova-cell1-conductor-0" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.378121 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08160d2e-8072-4d08-9dd2-4b5f256b6d9d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"08160d2e-8072-4d08-9dd2-4b5f256b6d9d\") " pod="openstack/nova-cell1-conductor-0" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.396084 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08160d2e-8072-4d08-9dd2-4b5f256b6d9d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"08160d2e-8072-4d08-9dd2-4b5f256b6d9d\") " pod="openstack/nova-cell1-conductor-0" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.403154 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5cnc\" (UniqueName: \"kubernetes.io/projected/08160d2e-8072-4d08-9dd2-4b5f256b6d9d-kube-api-access-n5cnc\") pod \"nova-cell1-conductor-0\" (UID: \"08160d2e-8072-4d08-9dd2-4b5f256b6d9d\") " pod="openstack/nova-cell1-conductor-0" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 
17:09:30.417385 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08160d2e-8072-4d08-9dd2-4b5f256b6d9d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"08160d2e-8072-4d08-9dd2-4b5f256b6d9d\") " pod="openstack/nova-cell1-conductor-0" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.462306 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.584133 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.685379 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3441bcd4-bf8b-406f-b3f5-1c723908bdc4-config-data\") pod \"3441bcd4-bf8b-406f-b3f5-1c723908bdc4\" (UID: \"3441bcd4-bf8b-406f-b3f5-1c723908bdc4\") " Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.685912 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3441bcd4-bf8b-406f-b3f5-1c723908bdc4-combined-ca-bundle\") pod \"3441bcd4-bf8b-406f-b3f5-1c723908bdc4\" (UID: \"3441bcd4-bf8b-406f-b3f5-1c723908bdc4\") " Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.686351 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dblx2\" (UniqueName: \"kubernetes.io/projected/3441bcd4-bf8b-406f-b3f5-1c723908bdc4-kube-api-access-dblx2\") pod \"3441bcd4-bf8b-406f-b3f5-1c723908bdc4\" (UID: \"3441bcd4-bf8b-406f-b3f5-1c723908bdc4\") " Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.690041 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3441bcd4-bf8b-406f-b3f5-1c723908bdc4-kube-api-access-dblx2" (OuterVolumeSpecName: "kube-api-access-dblx2") pod "3441bcd4-bf8b-406f-b3f5-1c723908bdc4" (UID: "3441bcd4-bf8b-406f-b3f5-1c723908bdc4"). InnerVolumeSpecName "kube-api-access-dblx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.743099 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3441bcd4-bf8b-406f-b3f5-1c723908bdc4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3441bcd4-bf8b-406f-b3f5-1c723908bdc4" (UID: "3441bcd4-bf8b-406f-b3f5-1c723908bdc4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.781235 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3441bcd4-bf8b-406f-b3f5-1c723908bdc4-config-data" (OuterVolumeSpecName: "config-data") pod "3441bcd4-bf8b-406f-b3f5-1c723908bdc4" (UID: "3441bcd4-bf8b-406f-b3f5-1c723908bdc4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.802625 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3441bcd4-bf8b-406f-b3f5-1c723908bdc4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.802971 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dblx2\" (UniqueName: \"kubernetes.io/projected/3441bcd4-bf8b-406f-b3f5-1c723908bdc4-kube-api-access-dblx2\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:30 crc kubenswrapper[4886]: I0129 17:09:30.802988 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3441bcd4-bf8b-406f-b3f5-1c723908bdc4-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:31 crc kubenswrapper[4886]: I0129 17:09:31.017747 4886 generic.go:334] "Generic (PLEG): container finished" podID="3441bcd4-bf8b-406f-b3f5-1c723908bdc4" containerID="8808eab58f9c8adf5605704cca70ec0bf454f6f62d9777e76ad457d3030718bd" exitCode=0 Jan 29 17:09:31 crc kubenswrapper[4886]: I0129 17:09:31.017799 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3441bcd4-bf8b-406f-b3f5-1c723908bdc4","Type":"ContainerDied","Data":"8808eab58f9c8adf5605704cca70ec0bf454f6f62d9777e76ad457d3030718bd"} Jan 29 17:09:31 crc kubenswrapper[4886]: I0129 17:09:31.017835 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3441bcd4-bf8b-406f-b3f5-1c723908bdc4","Type":"ContainerDied","Data":"95891069401cb7e43c836c472c728a63f5e1133c6a2287df2be68780c76d5016"} Jan 29 17:09:31 crc kubenswrapper[4886]: I0129 17:09:31.017852 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 17:09:31 crc kubenswrapper[4886]: I0129 17:09:31.017863 4886 scope.go:117] "RemoveContainer" containerID="8808eab58f9c8adf5605704cca70ec0bf454f6f62d9777e76ad457d3030718bd" Jan 29 17:09:31 crc kubenswrapper[4886]: I0129 17:09:31.053154 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 17:09:31 crc kubenswrapper[4886]: I0129 17:09:31.057101 4886 scope.go:117] "RemoveContainer" containerID="8808eab58f9c8adf5605704cca70ec0bf454f6f62d9777e76ad457d3030718bd" Jan 29 17:09:31 crc kubenswrapper[4886]: E0129 17:09:31.058036 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8808eab58f9c8adf5605704cca70ec0bf454f6f62d9777e76ad457d3030718bd\": container with ID starting with 8808eab58f9c8adf5605704cca70ec0bf454f6f62d9777e76ad457d3030718bd not found: ID does not exist" containerID="8808eab58f9c8adf5605704cca70ec0bf454f6f62d9777e76ad457d3030718bd" Jan 29 17:09:31 crc kubenswrapper[4886]: I0129 17:09:31.058070 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8808eab58f9c8adf5605704cca70ec0bf454f6f62d9777e76ad457d3030718bd"} err="failed to get container status \"8808eab58f9c8adf5605704cca70ec0bf454f6f62d9777e76ad457d3030718bd\": rpc error: code = NotFound desc = could not find container \"8808eab58f9c8adf5605704cca70ec0bf454f6f62d9777e76ad457d3030718bd\": container with ID starting with 8808eab58f9c8adf5605704cca70ec0bf454f6f62d9777e76ad457d3030718bd not found: ID does not exist" Jan 29 17:09:31 crc kubenswrapper[4886]: W0129 17:09:31.066699 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08160d2e_8072_4d08_9dd2_4b5f256b6d9d.slice/crio-4d3c24bf2c92e30e5d04905392db5a16900f98de2ca897fc14f080b8ecc389fe WatchSource:0}: Error finding container 4d3c24bf2c92e30e5d04905392db5a16900f98de2ca897fc14f080b8ecc389fe: Status 404 returned error can't find the container with id 4d3c24bf2c92e30e5d04905392db5a16900f98de2ca897fc14f080b8ecc389fe Jan 29 17:09:31 crc kubenswrapper[4886]: I0129 17:09:31.067880 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 17:09:31 crc kubenswrapper[4886]: I0129 17:09:31.089073 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 17:09:31 crc kubenswrapper[4886]: I0129 17:09:31.100969 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 17:09:31 crc kubenswrapper[4886]: E0129 17:09:31.101716 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3441bcd4-bf8b-406f-b3f5-1c723908bdc4" containerName="nova-scheduler-scheduler" Jan 29 17:09:31 crc kubenswrapper[4886]: I0129 17:09:31.101744 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="3441bcd4-bf8b-406f-b3f5-1c723908bdc4" containerName="nova-scheduler-scheduler" Jan 29 17:09:31 crc kubenswrapper[4886]: I0129 17:09:31.102034 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="3441bcd4-bf8b-406f-b3f5-1c723908bdc4" containerName="nova-scheduler-scheduler" Jan 29 17:09:31 crc kubenswrapper[4886]: I0129 17:09:31.103216 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 17:09:31 crc kubenswrapper[4886]: I0129 17:09:31.107632 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 17:09:31 crc kubenswrapper[4886]: I0129 17:09:31.112435 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 17:09:31 crc kubenswrapper[4886]: I0129 17:09:31.212919 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd8b58c7-942f-4f89-88a0-ce374fd98f0b-config-data\") pod \"nova-scheduler-0\" (UID: \"dd8b58c7-942f-4f89-88a0-ce374fd98f0b\") " pod="openstack/nova-scheduler-0" Jan 29 17:09:31 crc kubenswrapper[4886]: I0129 17:09:31.213057 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd8b58c7-942f-4f89-88a0-ce374fd98f0b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dd8b58c7-942f-4f89-88a0-ce374fd98f0b\") " pod="openstack/nova-scheduler-0" Jan 29 17:09:31 crc kubenswrapper[4886]: I0129 17:09:31.213092 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wgk9\" (UniqueName: \"kubernetes.io/projected/dd8b58c7-942f-4f89-88a0-ce374fd98f0b-kube-api-access-4wgk9\") pod \"nova-scheduler-0\" (UID: \"dd8b58c7-942f-4f89-88a0-ce374fd98f0b\") " pod="openstack/nova-scheduler-0" Jan 29 17:09:31 crc kubenswrapper[4886]: I0129 17:09:31.316697 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd8b58c7-942f-4f89-88a0-ce374fd98f0b-config-data\") pod \"nova-scheduler-0\" (UID: \"dd8b58c7-942f-4f89-88a0-ce374fd98f0b\") " pod="openstack/nova-scheduler-0" Jan 29 17:09:31 crc kubenswrapper[4886]: I0129 17:09:31.316800 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd8b58c7-942f-4f89-88a0-ce374fd98f0b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dd8b58c7-942f-4f89-88a0-ce374fd98f0b\") " pod="openstack/nova-scheduler-0" Jan 29 17:09:31 crc kubenswrapper[4886]: I0129 17:09:31.316823 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wgk9\" (UniqueName: \"kubernetes.io/projected/dd8b58c7-942f-4f89-88a0-ce374fd98f0b-kube-api-access-4wgk9\") pod \"nova-scheduler-0\" (UID: \"dd8b58c7-942f-4f89-88a0-ce374fd98f0b\") " pod="openstack/nova-scheduler-0" Jan 29 17:09:31 crc kubenswrapper[4886]: I0129 17:09:31.335082 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd8b58c7-942f-4f89-88a0-ce374fd98f0b-config-data\") pod \"nova-scheduler-0\" (UID: \"dd8b58c7-942f-4f89-88a0-ce374fd98f0b\") " pod="openstack/nova-scheduler-0" Jan 29 17:09:31 crc kubenswrapper[4886]: I0129 17:09:31.338165 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wgk9\" (UniqueName: \"kubernetes.io/projected/dd8b58c7-942f-4f89-88a0-ce374fd98f0b-kube-api-access-4wgk9\") pod \"nova-scheduler-0\" (UID: \"dd8b58c7-942f-4f89-88a0-ce374fd98f0b\") " pod="openstack/nova-scheduler-0" Jan 29 17:09:31 crc kubenswrapper[4886]: I0129 17:09:31.338513 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dd8b58c7-942f-4f89-88a0-ce374fd98f0b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dd8b58c7-942f-4f89-88a0-ce374fd98f0b\") " pod="openstack/nova-scheduler-0" Jan 29 17:09:31 crc kubenswrapper[4886]: I0129 17:09:31.428518 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 17:09:31 crc kubenswrapper[4886]: I0129 17:09:31.958140 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 17:09:32 crc kubenswrapper[4886]: I0129 17:09:32.032498 4886 generic.go:334] "Generic (PLEG): container finished" podID="c24e1f4d-2c34-4496-bd90-4fe840552491" containerID="9ac610ed30cb05a5e2e84f376b3dae669cc45f85e6a0aacf8442be252f9695ce" exitCode=0 Jan 29 17:09:32 crc kubenswrapper[4886]: I0129 17:09:32.032583 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c24e1f4d-2c34-4496-bd90-4fe840552491","Type":"ContainerDied","Data":"9ac610ed30cb05a5e2e84f376b3dae669cc45f85e6a0aacf8442be252f9695ce"} Jan 29 17:09:32 crc kubenswrapper[4886]: I0129 17:09:32.037038 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"08160d2e-8072-4d08-9dd2-4b5f256b6d9d","Type":"ContainerStarted","Data":"1c0ce7463f78b041e9b29ab18be8908204e14cb1e5eea448e46f3eae3f631984"} Jan 29 17:09:32 crc kubenswrapper[4886]: I0129 17:09:32.037070 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"08160d2e-8072-4d08-9dd2-4b5f256b6d9d","Type":"ContainerStarted","Data":"4d3c24bf2c92e30e5d04905392db5a16900f98de2ca897fc14f080b8ecc389fe"} Jan 29 17:09:32 crc kubenswrapper[4886]: I0129 17:09:32.037336 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 29 17:09:32 crc kubenswrapper[4886]: I0129 17:09:32.038353 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dd8b58c7-942f-4f89-88a0-ce374fd98f0b","Type":"ContainerStarted","Data":"c2ea7d41eadeb9e0900ac95c53b4acc74be8017115cf4e43325000be7c90063b"} Jan 29 17:09:32 crc kubenswrapper[4886]: I0129 17:09:32.063506 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.063488024 podStartE2EDuration="2.063488024s" podCreationTimestamp="2026-01-29 17:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:09:32.049231076 +0000 UTC m=+2854.957950348" watchObservedRunningTime="2026-01-29 17:09:32.063488024 +0000 UTC m=+2854.972207296" Jan 29 17:09:32 crc kubenswrapper[4886]: I0129 17:09:32.113597 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 17:09:32 crc kubenswrapper[4886]: I0129 17:09:32.235452 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c24e1f4d-2c34-4496-bd90-4fe840552491-combined-ca-bundle\") pod \"c24e1f4d-2c34-4496-bd90-4fe840552491\" (UID: \"c24e1f4d-2c34-4496-bd90-4fe840552491\") " Jan 29 17:09:32 crc kubenswrapper[4886]: I0129 17:09:32.235527 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sdhg\" (UniqueName: \"kubernetes.io/projected/c24e1f4d-2c34-4496-bd90-4fe840552491-kube-api-access-5sdhg\") pod \"c24e1f4d-2c34-4496-bd90-4fe840552491\" (UID: \"c24e1f4d-2c34-4496-bd90-4fe840552491\") " Jan 29 17:09:32 crc kubenswrapper[4886]: I0129 17:09:32.235610 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c24e1f4d-2c34-4496-bd90-4fe840552491-config-data\") pod \"c24e1f4d-2c34-4496-bd90-4fe840552491\" (UID: \"c24e1f4d-2c34-4496-bd90-4fe840552491\") " Jan 29 17:09:32 crc kubenswrapper[4886]: I0129 17:09:32.235816 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c24e1f4d-2c34-4496-bd90-4fe840552491-logs\") pod \"c24e1f4d-2c34-4496-bd90-4fe840552491\" (UID: \"c24e1f4d-2c34-4496-bd90-4fe840552491\") " Jan 29 17:09:32 crc kubenswrapper[4886]: I0129 17:09:32.237237 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c24e1f4d-2c34-4496-bd90-4fe840552491-logs" (OuterVolumeSpecName: "logs") pod "c24e1f4d-2c34-4496-bd90-4fe840552491" (UID: "c24e1f4d-2c34-4496-bd90-4fe840552491"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:09:32 crc kubenswrapper[4886]: I0129 17:09:32.239172 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c24e1f4d-2c34-4496-bd90-4fe840552491-kube-api-access-5sdhg" (OuterVolumeSpecName: "kube-api-access-5sdhg") pod "c24e1f4d-2c34-4496-bd90-4fe840552491" (UID: "c24e1f4d-2c34-4496-bd90-4fe840552491"). InnerVolumeSpecName "kube-api-access-5sdhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:09:32 crc kubenswrapper[4886]: I0129 17:09:32.274793 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c24e1f4d-2c34-4496-bd90-4fe840552491-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c24e1f4d-2c34-4496-bd90-4fe840552491" (UID: "c24e1f4d-2c34-4496-bd90-4fe840552491"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:32 crc kubenswrapper[4886]: I0129 17:09:32.287562 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c24e1f4d-2c34-4496-bd90-4fe840552491-config-data" (OuterVolumeSpecName: "config-data") pod "c24e1f4d-2c34-4496-bd90-4fe840552491" (UID: "c24e1f4d-2c34-4496-bd90-4fe840552491"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:32 crc kubenswrapper[4886]: I0129 17:09:32.339186 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c24e1f4d-2c34-4496-bd90-4fe840552491-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:32 crc kubenswrapper[4886]: I0129 17:09:32.339547 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sdhg\" (UniqueName: \"kubernetes.io/projected/c24e1f4d-2c34-4496-bd90-4fe840552491-kube-api-access-5sdhg\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:32 crc kubenswrapper[4886]: I0129 17:09:32.339562 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c24e1f4d-2c34-4496-bd90-4fe840552491-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:32 crc kubenswrapper[4886]: I0129 17:09:32.339734 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c24e1f4d-2c34-4496-bd90-4fe840552491-logs\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:32 crc kubenswrapper[4886]: I0129 17:09:32.633039 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3441bcd4-bf8b-406f-b3f5-1c723908bdc4" path="/var/lib/kubelet/pods/3441bcd4-bf8b-406f-b3f5-1c723908bdc4/volumes" Jan 29 17:09:33 crc kubenswrapper[4886]: I0129 17:09:33.052405 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dd8b58c7-942f-4f89-88a0-ce374fd98f0b","Type":"ContainerStarted","Data":"9734db9b6c351c8b935d8796b19514bcaecf82f2265e11ccf340fb3e8e4c7834"} Jan 29 17:09:33 crc kubenswrapper[4886]: I0129 17:09:33.054540 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c24e1f4d-2c34-4496-bd90-4fe840552491","Type":"ContainerDied","Data":"eb8a3baac4fbd0a80179f8a19f3f61fb9fca2e4d5dcfe096915c43ef69238e98"} Jan 29 17:09:33 crc kubenswrapper[4886]: I0129 17:09:33.054568 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 17:09:33 crc kubenswrapper[4886]: I0129 17:09:33.054582 4886 scope.go:117] "RemoveContainer" containerID="9ac610ed30cb05a5e2e84f376b3dae669cc45f85e6a0aacf8442be252f9695ce" Jan 29 17:09:33 crc kubenswrapper[4886]: I0129 17:09:33.086877 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.08685442 podStartE2EDuration="2.08685442s" podCreationTimestamp="2026-01-29 17:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:09:33.07777601 +0000 UTC m=+2855.986495302" watchObservedRunningTime="2026-01-29 17:09:33.08685442 +0000 UTC m=+2855.995573692" Jan 29 17:09:33 crc kubenswrapper[4886]: I0129 17:09:33.095952 4886 scope.go:117] "RemoveContainer" containerID="b24f4f5a92565d88d3fd3da1badf8b5f1cb84c27bbc9afb1415ec3f58dd94565" Jan 29 17:09:33 crc kubenswrapper[4886]: I0129 17:09:33.109393 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 17:09:33 crc kubenswrapper[4886]: I0129 17:09:33.123407 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 17:09:33 crc kubenswrapper[4886]: I0129 17:09:33.136802 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 17:09:33 crc kubenswrapper[4886]: E0129 17:09:33.137266 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c24e1f4d-2c34-4496-bd90-4fe840552491" containerName="nova-api-log" Jan 29 17:09:33 crc kubenswrapper[4886]: I0129 17:09:33.137282 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c24e1f4d-2c34-4496-bd90-4fe840552491" containerName="nova-api-log" Jan 29 17:09:33 crc kubenswrapper[4886]: E0129 17:09:33.137345 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c24e1f4d-2c34-4496-bd90-4fe840552491" containerName="nova-api-api" Jan 29 17:09:33 crc kubenswrapper[4886]: I0129 17:09:33.137352 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c24e1f4d-2c34-4496-bd90-4fe840552491" containerName="nova-api-api" Jan 29 17:09:33 crc kubenswrapper[4886]: I0129 17:09:33.137539 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="c24e1f4d-2c34-4496-bd90-4fe840552491" containerName="nova-api-log" Jan 29 17:09:33 crc kubenswrapper[4886]: I0129 17:09:33.137559 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="c24e1f4d-2c34-4496-bd90-4fe840552491" containerName="nova-api-api" Jan 29 17:09:33 crc kubenswrapper[4886]: I0129 17:09:33.144390 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 17:09:33 crc kubenswrapper[4886]: I0129 17:09:33.149898 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 17:09:33 crc kubenswrapper[4886]: I0129 17:09:33.178352 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 17:09:33 crc kubenswrapper[4886]: I0129 17:09:33.271123 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gwd6\" (UniqueName: \"kubernetes.io/projected/8c6e91d6-fc51-499e-b78b-00e296eac00d-kube-api-access-5gwd6\") pod \"nova-api-0\" (UID: \"8c6e91d6-fc51-499e-b78b-00e296eac00d\") " pod="openstack/nova-api-0" Jan 29 17:09:33 crc kubenswrapper[4886]: I0129 17:09:33.271598 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c6e91d6-fc51-499e-b78b-00e296eac00d-logs\") pod \"nova-api-0\" (UID: \"8c6e91d6-fc51-499e-b78b-00e296eac00d\") " pod="openstack/nova-api-0" Jan 29 17:09:33 crc kubenswrapper[4886]: I0129 17:09:33.271753 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c6e91d6-fc51-499e-b78b-00e296eac00d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8c6e91d6-fc51-499e-b78b-00e296eac00d\") " pod="openstack/nova-api-0" Jan 29 17:09:33 crc kubenswrapper[4886]: I0129 17:09:33.272005 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c6e91d6-fc51-499e-b78b-00e296eac00d-config-data\") pod \"nova-api-0\" (UID: \"8c6e91d6-fc51-499e-b78b-00e296eac00d\") " pod="openstack/nova-api-0" Jan 29 17:09:33 crc kubenswrapper[4886]: I0129 17:09:33.373844 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c6e91d6-fc51-499e-b78b-00e296eac00d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8c6e91d6-fc51-499e-b78b-00e296eac00d\") " pod="openstack/nova-api-0" Jan 29 17:09:33 crc kubenswrapper[4886]: I0129 17:09:33.374012 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c6e91d6-fc51-499e-b78b-00e296eac00d-config-data\") pod \"nova-api-0\" (UID: \"8c6e91d6-fc51-499e-b78b-00e296eac00d\") " pod="openstack/nova-api-0" Jan 29 17:09:33 crc kubenswrapper[4886]: I0129 17:09:33.374113 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gwd6\" (UniqueName: \"kubernetes.io/projected/8c6e91d6-fc51-499e-b78b-00e296eac00d-kube-api-access-5gwd6\") pod \"nova-api-0\" (UID: \"8c6e91d6-fc51-499e-b78b-00e296eac00d\") " pod="openstack/nova-api-0" Jan 29 17:09:33 crc kubenswrapper[4886]: I0129 17:09:33.374160 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c6e91d6-fc51-499e-b78b-00e296eac00d-logs\") pod \"nova-api-0\" (UID: \"8c6e91d6-fc51-499e-b78b-00e296eac00d\") " pod="openstack/nova-api-0" Jan 29 17:09:33 crc kubenswrapper[4886]: I0129 17:09:33.374656 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c6e91d6-fc51-499e-b78b-00e296eac00d-logs\") pod \"nova-api-0\" (UID: \"8c6e91d6-fc51-499e-b78b-00e296eac00d\") " 
pod="openstack/nova-api-0" Jan 29 17:09:33 crc kubenswrapper[4886]: I0129 17:09:33.381080 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c6e91d6-fc51-499e-b78b-00e296eac00d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8c6e91d6-fc51-499e-b78b-00e296eac00d\") " pod="openstack/nova-api-0" Jan 29 17:09:33 crc kubenswrapper[4886]: I0129 17:09:33.382066 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c6e91d6-fc51-499e-b78b-00e296eac00d-config-data\") pod \"nova-api-0\" (UID: \"8c6e91d6-fc51-499e-b78b-00e296eac00d\") " pod="openstack/nova-api-0" Jan 29 17:09:33 crc kubenswrapper[4886]: I0129 17:09:33.400661 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gwd6\" (UniqueName: \"kubernetes.io/projected/8c6e91d6-fc51-499e-b78b-00e296eac00d-kube-api-access-5gwd6\") pod \"nova-api-0\" (UID: \"8c6e91d6-fc51-499e-b78b-00e296eac00d\") " pod="openstack/nova-api-0" Jan 29 17:09:33 crc kubenswrapper[4886]: I0129 17:09:33.480213 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 17:09:33 crc kubenswrapper[4886]: I0129 17:09:33.981423 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 17:09:34 crc kubenswrapper[4886]: I0129 17:09:34.068395 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8c6e91d6-fc51-499e-b78b-00e296eac00d","Type":"ContainerStarted","Data":"2e00cbff980509a81df06975ce0505dd9daf5a8bd0d230ec6e3bf51d83a43450"} Jan 29 17:09:34 crc kubenswrapper[4886]: I0129 17:09:34.633140 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c24e1f4d-2c34-4496-bd90-4fe840552491" path="/var/lib/kubelet/pods/c24e1f4d-2c34-4496-bd90-4fe840552491/volumes" Jan 29 17:09:35 crc kubenswrapper[4886]: I0129 17:09:35.079858 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8c6e91d6-fc51-499e-b78b-00e296eac00d","Type":"ContainerStarted","Data":"f7c0f51e04a1da68994cf51db97c7c851cff30a285cc4a371f750594853805ae"} Jan 29 17:09:35 crc kubenswrapper[4886]: I0129 17:09:35.080209 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8c6e91d6-fc51-499e-b78b-00e296eac00d","Type":"ContainerStarted","Data":"b095c2996e7ff38f4d839b7c99b3243d8facce91df007a86d00bced397c851ce"} Jan 29 17:09:35 crc kubenswrapper[4886]: I0129 17:09:35.107232 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.10721183 podStartE2EDuration="2.10721183s" podCreationTimestamp="2026-01-29 17:09:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:09:35.10197172 +0000 UTC m=+2858.010691002" watchObservedRunningTime="2026-01-29 17:09:35.10721183 +0000 UTC m=+2858.015931102" Jan 29 17:09:36 crc kubenswrapper[4886]: I0129 17:09:36.429640 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 17:09:40 crc kubenswrapper[4886]: I0129 17:09:40.491155 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 29 17:09:41 crc kubenswrapper[4886]: I0129 17:09:41.429744 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-scheduler-0" Jan 29 17:09:41 crc kubenswrapper[4886]: I0129 17:09:41.462973 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 17:09:42 crc kubenswrapper[4886]: I0129 17:09:42.225566 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 17:09:43 crc kubenswrapper[4886]: I0129 17:09:43.481199 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 17:09:43 crc kubenswrapper[4886]: I0129 17:09:43.481540 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 17:09:44 crc kubenswrapper[4886]: I0129 17:09:44.522538 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8c6e91d6-fc51-499e-b78b-00e296eac00d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.8:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 17:09:44 crc kubenswrapper[4886]: I0129 17:09:44.563562 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8c6e91d6-fc51-499e-b78b-00e296eac00d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.8:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.200736 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.320349 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11-combined-ca-bundle\") pod \"cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11\" (UID: \"cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11\") " Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.320526 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmffz\" (UniqueName: \"kubernetes.io/projected/cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11-kube-api-access-rmffz\") pod \"cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11\" (UID: \"cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11\") " Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.320578 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11-config-data\") pod \"cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11\" (UID: \"cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11\") " Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.325727 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11-kube-api-access-rmffz" (OuterVolumeSpecName: "kube-api-access-rmffz") pod "cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11" (UID: "cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11"). InnerVolumeSpecName "kube-api-access-rmffz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.335116 4886 generic.go:334] "Generic (PLEG): container finished" podID="63670887-1250-42df-a728-315414be9901" containerID="2706075df7ed398bfa86a5019c0c0b891534965545aed4044f6858df83babfa9" exitCode=137 Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.335196 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"63670887-1250-42df-a728-315414be9901","Type":"ContainerDied","Data":"2706075df7ed398bfa86a5019c0c0b891534965545aed4044f6858df83babfa9"} Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.335277 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"63670887-1250-42df-a728-315414be9901","Type":"ContainerDied","Data":"54233804a9ed5dc337d2e33b8c617c4a33e85a8e6af923aaf251e6cf9186b374"} Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.335291 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54233804a9ed5dc337d2e33b8c617c4a33e85a8e6af923aaf251e6cf9186b374" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.337998 4886 generic.go:334] "Generic (PLEG): container finished" podID="cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11" containerID="c1835e2ae50e04a7c3dfeb3c6fd089c66709163b5092c57a8393b86cc24e0130" exitCode=137 Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.338030 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11","Type":"ContainerDied","Data":"c1835e2ae50e04a7c3dfeb3c6fd089c66709163b5092c57a8393b86cc24e0130"} Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.338063 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11","Type":"ContainerDied","Data":"b9417b27c0621c2b043b290e7d29fbfb8ed923b29824c45f4941d5924a3fcf00"} Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.338086 4886 scope.go:117] "RemoveContainer" containerID="c1835e2ae50e04a7c3dfeb3c6fd089c66709163b5092c57a8393b86cc24e0130" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.338102 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.345345 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.353100 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11" (UID: "cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.357119 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11-config-data" (OuterVolumeSpecName: "config-data") pod "cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11" (UID: "cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.366680 4886 scope.go:117] "RemoveContainer" containerID="c1835e2ae50e04a7c3dfeb3c6fd089c66709163b5092c57a8393b86cc24e0130" Jan 29 17:09:51 crc kubenswrapper[4886]: E0129 17:09:51.375535 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1835e2ae50e04a7c3dfeb3c6fd089c66709163b5092c57a8393b86cc24e0130\": container with ID starting with c1835e2ae50e04a7c3dfeb3c6fd089c66709163b5092c57a8393b86cc24e0130 not found: ID does not exist" containerID="c1835e2ae50e04a7c3dfeb3c6fd089c66709163b5092c57a8393b86cc24e0130" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.375587 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1835e2ae50e04a7c3dfeb3c6fd089c66709163b5092c57a8393b86cc24e0130"} err="failed to get container status \"c1835e2ae50e04a7c3dfeb3c6fd089c66709163b5092c57a8393b86cc24e0130\": rpc error: code = NotFound desc = could not find container \"c1835e2ae50e04a7c3dfeb3c6fd089c66709163b5092c57a8393b86cc24e0130\": container with ID starting with c1835e2ae50e04a7c3dfeb3c6fd089c66709163b5092c57a8393b86cc24e0130 not found: ID does not exist" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.418427 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.421999 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63670887-1250-42df-a728-315414be9901-config-data\") pod \"63670887-1250-42df-a728-315414be9901\" (UID: \"63670887-1250-42df-a728-315414be9901\") " Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.422119 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63670887-1250-42df-a728-315414be9901-logs\") pod \"63670887-1250-42df-a728-315414be9901\" (UID: \"63670887-1250-42df-a728-315414be9901\") " Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.422220 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frqq5\" (UniqueName: \"kubernetes.io/projected/63670887-1250-42df-a728-315414be9901-kube-api-access-frqq5\") pod \"63670887-1250-42df-a728-315414be9901\" (UID: \"63670887-1250-42df-a728-315414be9901\") " Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.422241 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63670887-1250-42df-a728-315414be9901-combined-ca-bundle\") pod \"63670887-1250-42df-a728-315414be9901\" (UID: \"63670887-1250-42df-a728-315414be9901\") " Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.422506 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63670887-1250-42df-a728-315414be9901-logs" (OuterVolumeSpecName: "logs") pod "63670887-1250-42df-a728-315414be9901" (UID: "63670887-1250-42df-a728-315414be9901"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.423023 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63670887-1250-42df-a728-315414be9901-logs\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.423047 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.423060 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmffz\" (UniqueName: \"kubernetes.io/projected/cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11-kube-api-access-rmffz\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.423073 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.426706 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63670887-1250-42df-a728-315414be9901-kube-api-access-frqq5" (OuterVolumeSpecName: "kube-api-access-frqq5") pod "63670887-1250-42df-a728-315414be9901" (UID: "63670887-1250-42df-a728-315414be9901"). InnerVolumeSpecName "kube-api-access-frqq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.455018 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63670887-1250-42df-a728-315414be9901-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63670887-1250-42df-a728-315414be9901" (UID: "63670887-1250-42df-a728-315414be9901"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.476442 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63670887-1250-42df-a728-315414be9901-config-data" (OuterVolumeSpecName: "config-data") pod "63670887-1250-42df-a728-315414be9901" (UID: "63670887-1250-42df-a728-315414be9901"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.525838 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frqq5\" (UniqueName: \"kubernetes.io/projected/63670887-1250-42df-a728-315414be9901-kube-api-access-frqq5\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.525871 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63670887-1250-42df-a728-315414be9901-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.525881 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63670887-1250-42df-a728-315414be9901-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.679496 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.692931 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.720217 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 17:09:51 crc kubenswrapper[4886]: E0129 17:09:51.720808 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63670887-1250-42df-a728-315414be9901" containerName="nova-metadata-log" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.720832 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="63670887-1250-42df-a728-315414be9901" containerName="nova-metadata-log" Jan 29 17:09:51 crc kubenswrapper[4886]: E0129 17:09:51.720862 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.720869 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 17:09:51 crc kubenswrapper[4886]: E0129 17:09:51.720888 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63670887-1250-42df-a728-315414be9901" containerName="nova-metadata-metadata" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.720895 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="63670887-1250-42df-a728-315414be9901" containerName="nova-metadata-metadata" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.721148 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="63670887-1250-42df-a728-315414be9901" containerName="nova-metadata-log" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.721159 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.721174 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="63670887-1250-42df-a728-315414be9901" containerName="nova-metadata-metadata" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.722001 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.728725 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.728842 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.729003 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.736250 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.831725 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2249ae5-133d-4750-9d7a-529dc8c9b39a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2249ae5-133d-4750-9d7a-529dc8c9b39a\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.832081 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2249ae5-133d-4750-9d7a-529dc8c9b39a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2249ae5-133d-4750-9d7a-529dc8c9b39a\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.832113 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2249ae5-133d-4750-9d7a-529dc8c9b39a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2249ae5-133d-4750-9d7a-529dc8c9b39a\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.832144 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqwsr\" (UniqueName: \"kubernetes.io/projected/c2249ae5-133d-4750-9d7a-529dc8c9b39a-kube-api-access-jqwsr\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2249ae5-133d-4750-9d7a-529dc8c9b39a\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.832173 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2249ae5-133d-4750-9d7a-529dc8c9b39a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2249ae5-133d-4750-9d7a-529dc8c9b39a\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.934271 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2249ae5-133d-4750-9d7a-529dc8c9b39a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2249ae5-133d-4750-9d7a-529dc8c9b39a\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.934784 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2249ae5-133d-4750-9d7a-529dc8c9b39a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2249ae5-133d-4750-9d7a-529dc8c9b39a\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 
17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.934967 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2249ae5-133d-4750-9d7a-529dc8c9b39a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2249ae5-133d-4750-9d7a-529dc8c9b39a\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.935131 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqwsr\" (UniqueName: \"kubernetes.io/projected/c2249ae5-133d-4750-9d7a-529dc8c9b39a-kube-api-access-jqwsr\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2249ae5-133d-4750-9d7a-529dc8c9b39a\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.935314 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2249ae5-133d-4750-9d7a-529dc8c9b39a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2249ae5-133d-4750-9d7a-529dc8c9b39a\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.941676 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2249ae5-133d-4750-9d7a-529dc8c9b39a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2249ae5-133d-4750-9d7a-529dc8c9b39a\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.942490 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2249ae5-133d-4750-9d7a-529dc8c9b39a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2249ae5-133d-4750-9d7a-529dc8c9b39a\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.944480 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2249ae5-133d-4750-9d7a-529dc8c9b39a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2249ae5-133d-4750-9d7a-529dc8c9b39a\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.945108 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2249ae5-133d-4750-9d7a-529dc8c9b39a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2249ae5-133d-4750-9d7a-529dc8c9b39a\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:51 crc kubenswrapper[4886]: I0129 17:09:51.963902 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqwsr\" (UniqueName: \"kubernetes.io/projected/c2249ae5-133d-4750-9d7a-529dc8c9b39a-kube-api-access-jqwsr\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2249ae5-133d-4750-9d7a-529dc8c9b39a\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:52 crc kubenswrapper[4886]: I0129 17:09:52.047852 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:52 crc kubenswrapper[4886]: I0129 17:09:52.352197 4886 util.go:48] "No ready sandbox for pod can be found. 
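Each VerifyControllerAttachedVolume -> MountVolume -> MountVolume.SetUp triplet above corresponds to one entry in the new nova-cell1-novncproxy-0 pod's volumes list: four secret-backed volumes plus the kubelet-projected service-account token. A sketch of the matching spec fragment in k8s.io/api/core/v1 terms -- the volume names are taken from the log, the 1:1 secret naming is an assumption:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Secret-backed volumes named exactly as in the mount entries above.
        names := []string{
            "config-data", "combined-ca-bundle",
            "vencrypt-tls-certs", "nova-novncproxy-tls-certs",
        }
        var vols []corev1.Volume
        for _, n := range names {
            vols = append(vols, corev1.Volume{
                Name: n,
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{SecretName: n}, // assumed 1:1 naming
                },
            })
        }
        // kube-api-access-jqwsr is the projected service-account token; the
        // kubelet injects it automatically rather than it being declared here.
        fmt.Println(len(vols), "declared volumes + 1 projected token volume")
    }
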
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 17:09:52 crc kubenswrapper[4886]: I0129 17:09:52.409677 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 17:09:52 crc kubenswrapper[4886]: I0129 17:09:52.432505 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 17:09:52 crc kubenswrapper[4886]: I0129 17:09:52.443999 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 17:09:52 crc kubenswrapper[4886]: I0129 17:09:52.447340 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 17:09:52 crc kubenswrapper[4886]: I0129 17:09:52.450582 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 17:09:52 crc kubenswrapper[4886]: I0129 17:09:52.451423 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 17:09:52 crc kubenswrapper[4886]: I0129 17:09:52.470257 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 17:09:52 crc kubenswrapper[4886]: I0129 17:09:52.567345 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ba13f7f-cb9d-4147-9f9d-982bd5daac77-config-data\") pod \"nova-metadata-0\" (UID: \"6ba13f7f-cb9d-4147-9f9d-982bd5daac77\") " pod="openstack/nova-metadata-0" Jan 29 17:09:52 crc kubenswrapper[4886]: I0129 17:09:52.567427 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ba13f7f-cb9d-4147-9f9d-982bd5daac77-logs\") pod \"nova-metadata-0\" (UID: \"6ba13f7f-cb9d-4147-9f9d-982bd5daac77\") " pod="openstack/nova-metadata-0" Jan 29 17:09:52 crc kubenswrapper[4886]: I0129 17:09:52.567462 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr8p2\" (UniqueName: \"kubernetes.io/projected/6ba13f7f-cb9d-4147-9f9d-982bd5daac77-kube-api-access-dr8p2\") pod \"nova-metadata-0\" (UID: \"6ba13f7f-cb9d-4147-9f9d-982bd5daac77\") " pod="openstack/nova-metadata-0" Jan 29 17:09:52 crc kubenswrapper[4886]: I0129 17:09:52.567487 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba13f7f-cb9d-4147-9f9d-982bd5daac77-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6ba13f7f-cb9d-4147-9f9d-982bd5daac77\") " pod="openstack/nova-metadata-0" Jan 29 17:09:52 crc kubenswrapper[4886]: I0129 17:09:52.567904 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba13f7f-cb9d-4147-9f9d-982bd5daac77-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6ba13f7f-cb9d-4147-9f9d-982bd5daac77\") " pod="openstack/nova-metadata-0" Jan 29 17:09:52 crc kubenswrapper[4886]: I0129 17:09:52.570549 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 17:09:52 crc kubenswrapper[4886]: W0129 17:09:52.572822 4886 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2249ae5_133d_4750_9d7a_529dc8c9b39a.slice/crio-6754113d5c90815a693b7e3bf97f1354a3d88b39a82b7947f77ce2319b1548f0 WatchSource:0}: Error finding container 6754113d5c90815a693b7e3bf97f1354a3d88b39a82b7947f77ce2319b1548f0: Status 404 returned error can't find the container with id 6754113d5c90815a693b7e3bf97f1354a3d88b39a82b7947f77ce2319b1548f0 Jan 29 17:09:52 crc kubenswrapper[4886]: I0129 17:09:52.631573 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63670887-1250-42df-a728-315414be9901" path="/var/lib/kubelet/pods/63670887-1250-42df-a728-315414be9901/volumes" Jan 29 17:09:52 crc kubenswrapper[4886]: I0129 17:09:52.633167 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11" path="/var/lib/kubelet/pods/cb5b14f4-92b2-4f90-bfb8-1d00ab4c7e11/volumes" Jan 29 17:09:52 crc kubenswrapper[4886]: I0129 17:09:52.670891 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ba13f7f-cb9d-4147-9f9d-982bd5daac77-config-data\") pod \"nova-metadata-0\" (UID: \"6ba13f7f-cb9d-4147-9f9d-982bd5daac77\") " pod="openstack/nova-metadata-0" Jan 29 17:09:52 crc kubenswrapper[4886]: I0129 17:09:52.671689 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ba13f7f-cb9d-4147-9f9d-982bd5daac77-logs\") pod \"nova-metadata-0\" (UID: \"6ba13f7f-cb9d-4147-9f9d-982bd5daac77\") " pod="openstack/nova-metadata-0" Jan 29 17:09:52 crc kubenswrapper[4886]: I0129 17:09:52.671738 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr8p2\" (UniqueName: \"kubernetes.io/projected/6ba13f7f-cb9d-4147-9f9d-982bd5daac77-kube-api-access-dr8p2\") pod \"nova-metadata-0\" (UID: \"6ba13f7f-cb9d-4147-9f9d-982bd5daac77\") " pod="openstack/nova-metadata-0" Jan 29 17:09:52 crc kubenswrapper[4886]: I0129 17:09:52.671767 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba13f7f-cb9d-4147-9f9d-982bd5daac77-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6ba13f7f-cb9d-4147-9f9d-982bd5daac77\") " pod="openstack/nova-metadata-0" Jan 29 17:09:52 crc kubenswrapper[4886]: I0129 17:09:52.671899 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba13f7f-cb9d-4147-9f9d-982bd5daac77-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6ba13f7f-cb9d-4147-9f9d-982bd5daac77\") " pod="openstack/nova-metadata-0" Jan 29 17:09:52 crc kubenswrapper[4886]: I0129 17:09:52.673020 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ba13f7f-cb9d-4147-9f9d-982bd5daac77-logs\") pod \"nova-metadata-0\" (UID: \"6ba13f7f-cb9d-4147-9f9d-982bd5daac77\") " pod="openstack/nova-metadata-0" Jan 29 17:09:52 crc kubenswrapper[4886]: I0129 17:09:52.678209 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba13f7f-cb9d-4147-9f9d-982bd5daac77-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6ba13f7f-cb9d-4147-9f9d-982bd5daac77\") " pod="openstack/nova-metadata-0" Jan 29 17:09:52 crc kubenswrapper[4886]: I0129 17:09:52.679539 4886 operation_generator.go:637] "MountVolume.SetUp 
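The cadvisor warning above ("Failed to process watch event ... Status 404") fires when the crio-<id> cgroup slice appears an instant before CRI-O can answer for the new container; the watch simply retries and the pod proceeds. The slice path itself encodes the pod UID (dashes replaced with underscores for systemd) and the runtime container ID, which a short sketch can recover:

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    func main() {
        p := "/kubepods.slice/kubepods-besteffort.slice/" +
            "kubepods-besteffort-podc2249ae5_133d_4750_9d7a_529dc8c9b39a.slice/" +
            "crio-6754113d5c90815a693b7e3bf97f1354a3d88b39a82b7947f77ce2319b1548f0"
        re := regexp.MustCompile(`pod([0-9a-f_]+)\.slice/crio-([0-9a-f]+)`)
        m := re.FindStringSubmatch(p)
        // Pod UIDs use '_' in systemd slice names; restore the dashes.
        fmt.Println("podUID:", strings.ReplaceAll(m[1], "_", "-")) // c2249ae5-133d-...
        fmt.Println("containerID:", m[2])                          // 6754113d5c90...
    }

The recovered UID matches the nova-cell1-novncproxy-0 pod created a moment earlier, confirming the 404 is just the registration race.
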
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ba13f7f-cb9d-4147-9f9d-982bd5daac77-config-data\") pod \"nova-metadata-0\" (UID: \"6ba13f7f-cb9d-4147-9f9d-982bd5daac77\") " pod="openstack/nova-metadata-0" Jan 29 17:09:52 crc kubenswrapper[4886]: I0129 17:09:52.679996 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba13f7f-cb9d-4147-9f9d-982bd5daac77-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6ba13f7f-cb9d-4147-9f9d-982bd5daac77\") " pod="openstack/nova-metadata-0" Jan 29 17:09:52 crc kubenswrapper[4886]: I0129 17:09:52.690530 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr8p2\" (UniqueName: \"kubernetes.io/projected/6ba13f7f-cb9d-4147-9f9d-982bd5daac77-kube-api-access-dr8p2\") pod \"nova-metadata-0\" (UID: \"6ba13f7f-cb9d-4147-9f9d-982bd5daac77\") " pod="openstack/nova-metadata-0" Jan 29 17:09:52 crc kubenswrapper[4886]: I0129 17:09:52.772617 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 17:09:53 crc kubenswrapper[4886]: W0129 17:09:53.285015 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ba13f7f_cb9d_4147_9f9d_982bd5daac77.slice/crio-590686b9473f5c18e61b69cef7feee9a7b36c136560c55bdbbed141a70bc112d WatchSource:0}: Error finding container 590686b9473f5c18e61b69cef7feee9a7b36c136560c55bdbbed141a70bc112d: Status 404 returned error can't find the container with id 590686b9473f5c18e61b69cef7feee9a7b36c136560c55bdbbed141a70bc112d Jan 29 17:09:53 crc kubenswrapper[4886]: I0129 17:09:53.287139 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 17:09:53 crc kubenswrapper[4886]: I0129 17:09:53.366115 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6ba13f7f-cb9d-4147-9f9d-982bd5daac77","Type":"ContainerStarted","Data":"590686b9473f5c18e61b69cef7feee9a7b36c136560c55bdbbed141a70bc112d"} Jan 29 17:09:53 crc kubenswrapper[4886]: I0129 17:09:53.367705 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c2249ae5-133d-4750-9d7a-529dc8c9b39a","Type":"ContainerStarted","Data":"fc9c7d986999e5ce62132e547e3e1eb4f54671ebbd953c356d465c7357a314a3"} Jan 29 17:09:53 crc kubenswrapper[4886]: I0129 17:09:53.367754 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c2249ae5-133d-4750-9d7a-529dc8c9b39a","Type":"ContainerStarted","Data":"6754113d5c90815a693b7e3bf97f1354a3d88b39a82b7947f77ce2319b1548f0"} Jan 29 17:09:53 crc kubenswrapper[4886]: I0129 17:09:53.394051 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.394032638 podStartE2EDuration="2.394032638s" podCreationTimestamp="2026-01-29 17:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:09:53.391928848 +0000 UTC m=+2876.300648130" watchObservedRunningTime="2026-01-29 17:09:53.394032638 +0000 UTC m=+2876.302751910" Jan 29 17:09:53 crc kubenswrapper[4886]: I0129 17:09:53.488178 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 17:09:53 crc kubenswrapper[4886]: I0129 17:09:53.490841 4886 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 17:09:53 crc kubenswrapper[4886]: I0129 17:09:53.494543 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 17:09:53 crc kubenswrapper[4886]: I0129 17:09:53.494916 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 17:09:53 crc kubenswrapper[4886]: I0129 17:09:53.689395 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b7z4z"] Jan 29 17:09:53 crc kubenswrapper[4886]: I0129 17:09:53.692075 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b7z4z" Jan 29 17:09:53 crc kubenswrapper[4886]: I0129 17:09:53.697407 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b7z4z"] Jan 29 17:09:53 crc kubenswrapper[4886]: I0129 17:09:53.801983 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/265d5adc-ace5-4008-99d5-206b5182e6d4-utilities\") pod \"community-operators-b7z4z\" (UID: \"265d5adc-ace5-4008-99d5-206b5182e6d4\") " pod="openshift-marketplace/community-operators-b7z4z" Jan 29 17:09:53 crc kubenswrapper[4886]: I0129 17:09:53.802747 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkxvc\" (UniqueName: \"kubernetes.io/projected/265d5adc-ace5-4008-99d5-206b5182e6d4-kube-api-access-xkxvc\") pod \"community-operators-b7z4z\" (UID: \"265d5adc-ace5-4008-99d5-206b5182e6d4\") " pod="openshift-marketplace/community-operators-b7z4z" Jan 29 17:09:53 crc kubenswrapper[4886]: I0129 17:09:53.802848 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/265d5adc-ace5-4008-99d5-206b5182e6d4-catalog-content\") pod \"community-operators-b7z4z\" (UID: \"265d5adc-ace5-4008-99d5-206b5182e6d4\") " pod="openshift-marketplace/community-operators-b7z4z" Jan 29 17:09:53 crc kubenswrapper[4886]: I0129 17:09:53.904374 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/265d5adc-ace5-4008-99d5-206b5182e6d4-utilities\") pod \"community-operators-b7z4z\" (UID: \"265d5adc-ace5-4008-99d5-206b5182e6d4\") " pod="openshift-marketplace/community-operators-b7z4z" Jan 29 17:09:53 crc kubenswrapper[4886]: I0129 17:09:53.904498 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkxvc\" (UniqueName: \"kubernetes.io/projected/265d5adc-ace5-4008-99d5-206b5182e6d4-kube-api-access-xkxvc\") pod \"community-operators-b7z4z\" (UID: \"265d5adc-ace5-4008-99d5-206b5182e6d4\") " pod="openshift-marketplace/community-operators-b7z4z" Jan 29 17:09:53 crc kubenswrapper[4886]: I0129 17:09:53.904550 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/265d5adc-ace5-4008-99d5-206b5182e6d4-catalog-content\") pod \"community-operators-b7z4z\" (UID: \"265d5adc-ace5-4008-99d5-206b5182e6d4\") " pod="openshift-marketplace/community-operators-b7z4z" Jan 29 17:09:53 crc kubenswrapper[4886]: I0129 17:09:53.904841 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/265d5adc-ace5-4008-99d5-206b5182e6d4-catalog-content\") pod \"community-operators-b7z4z\" (UID: \"265d5adc-ace5-4008-99d5-206b5182e6d4\") " pod="openshift-marketplace/community-operators-b7z4z" Jan 29 17:09:53 crc kubenswrapper[4886]: I0129 17:09:53.904896 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/265d5adc-ace5-4008-99d5-206b5182e6d4-utilities\") pod \"community-operators-b7z4z\" (UID: \"265d5adc-ace5-4008-99d5-206b5182e6d4\") " pod="openshift-marketplace/community-operators-b7z4z" Jan 29 17:09:53 crc kubenswrapper[4886]: I0129 17:09:53.934701 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkxvc\" (UniqueName: \"kubernetes.io/projected/265d5adc-ace5-4008-99d5-206b5182e6d4-kube-api-access-xkxvc\") pod \"community-operators-b7z4z\" (UID: \"265d5adc-ace5-4008-99d5-206b5182e6d4\") " pod="openshift-marketplace/community-operators-b7z4z" Jan 29 17:09:54 crc kubenswrapper[4886]: I0129 17:09:54.047005 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b7z4z" Jan 29 17:09:54 crc kubenswrapper[4886]: I0129 17:09:54.384474 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6ba13f7f-cb9d-4147-9f9d-982bd5daac77","Type":"ContainerStarted","Data":"cd779590c513b85f1be24ee1be77a1addf20dbbca3b8eb0c655a6287c5d23cb9"} Jan 29 17:09:54 crc kubenswrapper[4886]: I0129 17:09:54.384909 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6ba13f7f-cb9d-4147-9f9d-982bd5daac77","Type":"ContainerStarted","Data":"5b523a0231e956d5db224e5c8db2f3e8aaf553d5abc7de07ad05e39c231cc3fc"} Jan 29 17:09:54 crc kubenswrapper[4886]: I0129 17:09:54.386263 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 17:09:54 crc kubenswrapper[4886]: I0129 17:09:54.389868 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 17:09:54 crc kubenswrapper[4886]: I0129 17:09:54.412468 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.412444583 podStartE2EDuration="2.412444583s" podCreationTimestamp="2026-01-29 17:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:09:54.411449404 +0000 UTC m=+2877.320168666" watchObservedRunningTime="2026-01-29 17:09:54.412444583 +0000 UTC m=+2877.321163855" Jan 29 17:09:54 crc kubenswrapper[4886]: I0129 17:09:54.650688 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b7z4z"] Jan 29 17:09:54 crc kubenswrapper[4886]: I0129 17:09:54.712365 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-fh86h"] Jan 29 17:09:54 crc kubenswrapper[4886]: I0129 17:09:54.715203 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-fh86h" Jan 29 17:09:54 crc kubenswrapper[4886]: I0129 17:09:54.745345 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-fh86h"] Jan 29 17:09:54 crc kubenswrapper[4886]: I0129 17:09:54.846717 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efe27968-ef82-463a-8852-222528e7980d-config\") pod \"dnsmasq-dns-6b7bbf7cf9-fh86h\" (UID: \"efe27968-ef82-463a-8852-222528e7980d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fh86h" Jan 29 17:09:54 crc kubenswrapper[4886]: I0129 17:09:54.846761 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efe27968-ef82-463a-8852-222528e7980d-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-fh86h\" (UID: \"efe27968-ef82-463a-8852-222528e7980d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fh86h" Jan 29 17:09:54 crc kubenswrapper[4886]: I0129 17:09:54.846782 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/efe27968-ef82-463a-8852-222528e7980d-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-fh86h\" (UID: \"efe27968-ef82-463a-8852-222528e7980d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fh86h" Jan 29 17:09:54 crc kubenswrapper[4886]: I0129 17:09:54.846935 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/efe27968-ef82-463a-8852-222528e7980d-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-fh86h\" (UID: \"efe27968-ef82-463a-8852-222528e7980d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fh86h" Jan 29 17:09:54 crc kubenswrapper[4886]: I0129 17:09:54.847001 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/efe27968-ef82-463a-8852-222528e7980d-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-fh86h\" (UID: \"efe27968-ef82-463a-8852-222528e7980d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fh86h" Jan 29 17:09:54 crc kubenswrapper[4886]: I0129 17:09:54.847024 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcpq8\" (UniqueName: \"kubernetes.io/projected/efe27968-ef82-463a-8852-222528e7980d-kube-api-access-bcpq8\") pod \"dnsmasq-dns-6b7bbf7cf9-fh86h\" (UID: \"efe27968-ef82-463a-8852-222528e7980d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fh86h" Jan 29 17:09:54 crc kubenswrapper[4886]: I0129 17:09:54.949738 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/efe27968-ef82-463a-8852-222528e7980d-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-fh86h\" (UID: \"efe27968-ef82-463a-8852-222528e7980d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fh86h" Jan 29 17:09:54 crc kubenswrapper[4886]: I0129 17:09:54.950186 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/efe27968-ef82-463a-8852-222528e7980d-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-fh86h\" (UID: \"efe27968-ef82-463a-8852-222528e7980d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fh86h" Jan 29 17:09:54 crc kubenswrapper[4886]: I0129 17:09:54.950210 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcpq8\" (UniqueName: \"kubernetes.io/projected/efe27968-ef82-463a-8852-222528e7980d-kube-api-access-bcpq8\") pod \"dnsmasq-dns-6b7bbf7cf9-fh86h\" (UID: \"efe27968-ef82-463a-8852-222528e7980d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fh86h" Jan 29 17:09:54 crc kubenswrapper[4886]: I0129 17:09:54.950253 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efe27968-ef82-463a-8852-222528e7980d-config\") pod \"dnsmasq-dns-6b7bbf7cf9-fh86h\" (UID: \"efe27968-ef82-463a-8852-222528e7980d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fh86h" Jan 29 17:09:54 crc kubenswrapper[4886]: I0129 17:09:54.950269 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efe27968-ef82-463a-8852-222528e7980d-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-fh86h\" (UID: \"efe27968-ef82-463a-8852-222528e7980d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fh86h" Jan 29 17:09:54 crc kubenswrapper[4886]: I0129 17:09:54.950290 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/efe27968-ef82-463a-8852-222528e7980d-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-fh86h\" (UID: \"efe27968-ef82-463a-8852-222528e7980d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fh86h" Jan 29 17:09:54 crc kubenswrapper[4886]: I0129 17:09:54.951011 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/efe27968-ef82-463a-8852-222528e7980d-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-fh86h\" (UID: \"efe27968-ef82-463a-8852-222528e7980d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fh86h" Jan 29 17:09:54 crc kubenswrapper[4886]: I0129 17:09:54.951096 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/efe27968-ef82-463a-8852-222528e7980d-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-fh86h\" (UID: \"efe27968-ef82-463a-8852-222528e7980d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fh86h" Jan 29 17:09:54 crc kubenswrapper[4886]: I0129 17:09:54.952569 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efe27968-ef82-463a-8852-222528e7980d-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-fh86h\" (UID: \"efe27968-ef82-463a-8852-222528e7980d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fh86h" Jan 29 17:09:54 crc kubenswrapper[4886]: I0129 17:09:54.953625 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efe27968-ef82-463a-8852-222528e7980d-config\") pod \"dnsmasq-dns-6b7bbf7cf9-fh86h\" (UID: \"efe27968-ef82-463a-8852-222528e7980d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fh86h" Jan 29 17:09:54 crc kubenswrapper[4886]: I0129 17:09:54.956476 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/efe27968-ef82-463a-8852-222528e7980d-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-fh86h\" (UID: \"efe27968-ef82-463a-8852-222528e7980d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fh86h" Jan 29 17:09:54 crc kubenswrapper[4886]: I0129 17:09:54.987972 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcpq8\" (UniqueName: 
\"kubernetes.io/projected/efe27968-ef82-463a-8852-222528e7980d-kube-api-access-bcpq8\") pod \"dnsmasq-dns-6b7bbf7cf9-fh86h\" (UID: \"efe27968-ef82-463a-8852-222528e7980d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fh86h" Jan 29 17:09:55 crc kubenswrapper[4886]: I0129 17:09:55.071739 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-fh86h" Jan 29 17:09:55 crc kubenswrapper[4886]: I0129 17:09:55.399030 4886 generic.go:334] "Generic (PLEG): container finished" podID="265d5adc-ace5-4008-99d5-206b5182e6d4" containerID="c1dd6ae46daebf75b61de05db1d9dcf57ca090cd74e3c93bdef7a80a5b1e0368" exitCode=0 Jan 29 17:09:55 crc kubenswrapper[4886]: I0129 17:09:55.399789 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b7z4z" event={"ID":"265d5adc-ace5-4008-99d5-206b5182e6d4","Type":"ContainerDied","Data":"c1dd6ae46daebf75b61de05db1d9dcf57ca090cd74e3c93bdef7a80a5b1e0368"} Jan 29 17:09:55 crc kubenswrapper[4886]: I0129 17:09:55.399845 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b7z4z" event={"ID":"265d5adc-ace5-4008-99d5-206b5182e6d4","Type":"ContainerStarted","Data":"b49a773367da81a381e19a2ba4ecf2f2565cbe6beacc718a457751390e647a71"} Jan 29 17:09:55 crc kubenswrapper[4886]: I0129 17:09:55.401837 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 17:09:55 crc kubenswrapper[4886]: I0129 17:09:55.626110 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-fh86h"] Jan 29 17:09:56 crc kubenswrapper[4886]: I0129 17:09:56.410811 4886 generic.go:334] "Generic (PLEG): container finished" podID="efe27968-ef82-463a-8852-222528e7980d" containerID="8e8f92d48ecc2d99355334d6891f6a7a18b5bf8604dbd8b2719327472baa935c" exitCode=0 Jan 29 17:09:56 crc kubenswrapper[4886]: I0129 17:09:56.410895 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-fh86h" event={"ID":"efe27968-ef82-463a-8852-222528e7980d","Type":"ContainerDied","Data":"8e8f92d48ecc2d99355334d6891f6a7a18b5bf8604dbd8b2719327472baa935c"} Jan 29 17:09:56 crc kubenswrapper[4886]: I0129 17:09:56.411195 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-fh86h" event={"ID":"efe27968-ef82-463a-8852-222528e7980d","Type":"ContainerStarted","Data":"5c71118f414dec8188ace8063b50692f92c8e5698781b6464b0323ed841eca32"} Jan 29 17:09:56 crc kubenswrapper[4886]: I0129 17:09:56.415250 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b7z4z" event={"ID":"265d5adc-ace5-4008-99d5-206b5182e6d4","Type":"ContainerStarted","Data":"3348e603d16bdd075d9fa10e25af3a479e537e3ba1e85926303e7efb2d68b173"} Jan 29 17:09:56 crc kubenswrapper[4886]: I0129 17:09:56.630543 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:09:56 crc kubenswrapper[4886]: I0129 17:09:56.631029 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="295921c4-07ca-4972-a4fa-0a64f46855ec" containerName="proxy-httpd" containerID="cri-o://63a6dbf76c0560d2045aa913e46fcd8eb27522f3a2df8c23f4d345a42f6982ef" gracePeriod=30 Jan 29 17:09:56 crc kubenswrapper[4886]: I0129 17:09:56.631108 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="295921c4-07ca-4972-a4fa-0a64f46855ec" 
containerName="sg-core" containerID="cri-o://3856ce84dbdc829026cdc077123a144ae1db22ed2ef5daec2a2a38e79ea5fff2" gracePeriod=30 Jan 29 17:09:56 crc kubenswrapper[4886]: I0129 17:09:56.631028 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="295921c4-07ca-4972-a4fa-0a64f46855ec" containerName="ceilometer-central-agent" containerID="cri-o://0b0960c021f6fe492666e7a5f8550203f34c505c88a04448efdf009572fba707" gracePeriod=30 Jan 29 17:09:56 crc kubenswrapper[4886]: I0129 17:09:56.631195 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="295921c4-07ca-4972-a4fa-0a64f46855ec" containerName="ceilometer-notification-agent" containerID="cri-o://35e24ed99f8fd2890904f1ca37992a754b300543953f2f3061639a8631f92529" gracePeriod=30 Jan 29 17:09:57 crc kubenswrapper[4886]: I0129 17:09:57.048716 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:09:57 crc kubenswrapper[4886]: I0129 17:09:57.431061 4886 generic.go:334] "Generic (PLEG): container finished" podID="295921c4-07ca-4972-a4fa-0a64f46855ec" containerID="63a6dbf76c0560d2045aa913e46fcd8eb27522f3a2df8c23f4d345a42f6982ef" exitCode=0 Jan 29 17:09:57 crc kubenswrapper[4886]: I0129 17:09:57.431097 4886 generic.go:334] "Generic (PLEG): container finished" podID="295921c4-07ca-4972-a4fa-0a64f46855ec" containerID="3856ce84dbdc829026cdc077123a144ae1db22ed2ef5daec2a2a38e79ea5fff2" exitCode=2 Jan 29 17:09:57 crc kubenswrapper[4886]: I0129 17:09:57.431106 4886 generic.go:334] "Generic (PLEG): container finished" podID="295921c4-07ca-4972-a4fa-0a64f46855ec" containerID="0b0960c021f6fe492666e7a5f8550203f34c505c88a04448efdf009572fba707" exitCode=0 Jan 29 17:09:57 crc kubenswrapper[4886]: I0129 17:09:57.431135 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"295921c4-07ca-4972-a4fa-0a64f46855ec","Type":"ContainerDied","Data":"63a6dbf76c0560d2045aa913e46fcd8eb27522f3a2df8c23f4d345a42f6982ef"} Jan 29 17:09:57 crc kubenswrapper[4886]: I0129 17:09:57.431189 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"295921c4-07ca-4972-a4fa-0a64f46855ec","Type":"ContainerDied","Data":"3856ce84dbdc829026cdc077123a144ae1db22ed2ef5daec2a2a38e79ea5fff2"} Jan 29 17:09:57 crc kubenswrapper[4886]: I0129 17:09:57.431202 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"295921c4-07ca-4972-a4fa-0a64f46855ec","Type":"ContainerDied","Data":"0b0960c021f6fe492666e7a5f8550203f34c505c88a04448efdf009572fba707"} Jan 29 17:09:57 crc kubenswrapper[4886]: I0129 17:09:57.433432 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-fh86h" event={"ID":"efe27968-ef82-463a-8852-222528e7980d","Type":"ContainerStarted","Data":"1e10af47f9cb65f41c613b3888f9ea857bb52e7733a459e738c1fe3fa046d41a"} Jan 29 17:09:57 crc kubenswrapper[4886]: I0129 17:09:57.433995 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-fh86h" Jan 29 17:09:57 crc kubenswrapper[4886]: I0129 17:09:57.456011 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-fh86h" podStartSLOduration=3.455985915 podStartE2EDuration="3.455985915s" podCreationTimestamp="2026-01-29 17:09:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:09:57.451852686 +0000 UTC m=+2880.360571958" watchObservedRunningTime="2026-01-29 17:09:57.455985915 +0000 UTC m=+2880.364705187" Jan 29 17:09:57 crc kubenswrapper[4886]: I0129 17:09:57.655455 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 17:09:57 crc kubenswrapper[4886]: I0129 17:09:57.656097 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8c6e91d6-fc51-499e-b78b-00e296eac00d" containerName="nova-api-log" containerID="cri-o://b095c2996e7ff38f4d839b7c99b3243d8facce91df007a86d00bced397c851ce" gracePeriod=30 Jan 29 17:09:57 crc kubenswrapper[4886]: I0129 17:09:57.656162 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8c6e91d6-fc51-499e-b78b-00e296eac00d" containerName="nova-api-api" containerID="cri-o://f7c0f51e04a1da68994cf51db97c7c851cff30a285cc4a371f750594853805ae" gracePeriod=30 Jan 29 17:09:57 crc kubenswrapper[4886]: I0129 17:09:57.773462 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 17:09:57 crc kubenswrapper[4886]: I0129 17:09:57.773929 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.107982 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.250241 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/295921c4-07ca-4972-a4fa-0a64f46855ec-log-httpd\") pod \"295921c4-07ca-4972-a4fa-0a64f46855ec\" (UID: \"295921c4-07ca-4972-a4fa-0a64f46855ec\") " Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.250898 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/295921c4-07ca-4972-a4fa-0a64f46855ec-combined-ca-bundle\") pod \"295921c4-07ca-4972-a4fa-0a64f46855ec\" (UID: \"295921c4-07ca-4972-a4fa-0a64f46855ec\") " Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.251154 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/295921c4-07ca-4972-a4fa-0a64f46855ec-sg-core-conf-yaml\") pod \"295921c4-07ca-4972-a4fa-0a64f46855ec\" (UID: \"295921c4-07ca-4972-a4fa-0a64f46855ec\") " Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.251160 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/295921c4-07ca-4972-a4fa-0a64f46855ec-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "295921c4-07ca-4972-a4fa-0a64f46855ec" (UID: "295921c4-07ca-4972-a4fa-0a64f46855ec"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.251246 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjk8j\" (UniqueName: \"kubernetes.io/projected/295921c4-07ca-4972-a4fa-0a64f46855ec-kube-api-access-wjk8j\") pod \"295921c4-07ca-4972-a4fa-0a64f46855ec\" (UID: \"295921c4-07ca-4972-a4fa-0a64f46855ec\") " Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.251397 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/295921c4-07ca-4972-a4fa-0a64f46855ec-run-httpd\") pod \"295921c4-07ca-4972-a4fa-0a64f46855ec\" (UID: \"295921c4-07ca-4972-a4fa-0a64f46855ec\") " Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.251463 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/295921c4-07ca-4972-a4fa-0a64f46855ec-config-data\") pod \"295921c4-07ca-4972-a4fa-0a64f46855ec\" (UID: \"295921c4-07ca-4972-a4fa-0a64f46855ec\") " Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.251534 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/295921c4-07ca-4972-a4fa-0a64f46855ec-scripts\") pod \"295921c4-07ca-4972-a4fa-0a64f46855ec\" (UID: \"295921c4-07ca-4972-a4fa-0a64f46855ec\") " Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.252244 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/295921c4-07ca-4972-a4fa-0a64f46855ec-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "295921c4-07ca-4972-a4fa-0a64f46855ec" (UID: "295921c4-07ca-4972-a4fa-0a64f46855ec"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.253194 4886 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/295921c4-07ca-4972-a4fa-0a64f46855ec-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.253234 4886 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/295921c4-07ca-4972-a4fa-0a64f46855ec-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.257988 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/295921c4-07ca-4972-a4fa-0a64f46855ec-kube-api-access-wjk8j" (OuterVolumeSpecName: "kube-api-access-wjk8j") pod "295921c4-07ca-4972-a4fa-0a64f46855ec" (UID: "295921c4-07ca-4972-a4fa-0a64f46855ec"). InnerVolumeSpecName "kube-api-access-wjk8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.263509 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/295921c4-07ca-4972-a4fa-0a64f46855ec-scripts" (OuterVolumeSpecName: "scripts") pod "295921c4-07ca-4972-a4fa-0a64f46855ec" (UID: "295921c4-07ca-4972-a4fa-0a64f46855ec"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.308452 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/295921c4-07ca-4972-a4fa-0a64f46855ec-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "295921c4-07ca-4972-a4fa-0a64f46855ec" (UID: "295921c4-07ca-4972-a4fa-0a64f46855ec"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.355860 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/295921c4-07ca-4972-a4fa-0a64f46855ec-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.355897 4886 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/295921c4-07ca-4972-a4fa-0a64f46855ec-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.355915 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjk8j\" (UniqueName: \"kubernetes.io/projected/295921c4-07ca-4972-a4fa-0a64f46855ec-kube-api-access-wjk8j\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.363611 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/295921c4-07ca-4972-a4fa-0a64f46855ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "295921c4-07ca-4972-a4fa-0a64f46855ec" (UID: "295921c4-07ca-4972-a4fa-0a64f46855ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.393409 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/295921c4-07ca-4972-a4fa-0a64f46855ec-config-data" (OuterVolumeSpecName: "config-data") pod "295921c4-07ca-4972-a4fa-0a64f46855ec" (UID: "295921c4-07ca-4972-a4fa-0a64f46855ec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.448292 4886 generic.go:334] "Generic (PLEG): container finished" podID="295921c4-07ca-4972-a4fa-0a64f46855ec" containerID="35e24ed99f8fd2890904f1ca37992a754b300543953f2f3061639a8631f92529" exitCode=0 Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.448382 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"295921c4-07ca-4972-a4fa-0a64f46855ec","Type":"ContainerDied","Data":"35e24ed99f8fd2890904f1ca37992a754b300543953f2f3061639a8631f92529"} Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.448400 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.448425 4886 scope.go:117] "RemoveContainer" containerID="63a6dbf76c0560d2045aa913e46fcd8eb27522f3a2df8c23f4d345a42f6982ef" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.448413 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"295921c4-07ca-4972-a4fa-0a64f46855ec","Type":"ContainerDied","Data":"a53c80ed86f57307186bc127fbed1c995aed2de96e312e93825a7c90882f5022"} Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.452581 4886 generic.go:334] "Generic (PLEG): container finished" podID="265d5adc-ace5-4008-99d5-206b5182e6d4" containerID="3348e603d16bdd075d9fa10e25af3a479e537e3ba1e85926303e7efb2d68b173" exitCode=0 Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.452653 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b7z4z" event={"ID":"265d5adc-ace5-4008-99d5-206b5182e6d4","Type":"ContainerDied","Data":"3348e603d16bdd075d9fa10e25af3a479e537e3ba1e85926303e7efb2d68b173"} Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.458275 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/295921c4-07ca-4972-a4fa-0a64f46855ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.458304 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/295921c4-07ca-4972-a4fa-0a64f46855ec-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.458486 4886 generic.go:334] "Generic (PLEG): container finished" podID="8c6e91d6-fc51-499e-b78b-00e296eac00d" containerID="b095c2996e7ff38f4d839b7c99b3243d8facce91df007a86d00bced397c851ce" exitCode=143 Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.458547 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8c6e91d6-fc51-499e-b78b-00e296eac00d","Type":"ContainerDied","Data":"b095c2996e7ff38f4d839b7c99b3243d8facce91df007a86d00bced397c851ce"} Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.488117 4886 scope.go:117] "RemoveContainer" containerID="3856ce84dbdc829026cdc077123a144ae1db22ed2ef5daec2a2a38e79ea5fff2" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.529319 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.529768 4886 scope.go:117] "RemoveContainer" containerID="35e24ed99f8fd2890904f1ca37992a754b300543953f2f3061639a8631f92529" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.546244 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.577905 4886 scope.go:117] "RemoveContainer" containerID="0b0960c021f6fe492666e7a5f8550203f34c505c88a04448efdf009572fba707" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.579072 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:09:58 crc kubenswrapper[4886]: E0129 17:09:58.579946 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="295921c4-07ca-4972-a4fa-0a64f46855ec" containerName="ceilometer-notification-agent" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.579965 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="295921c4-07ca-4972-a4fa-0a64f46855ec" 
containerName="ceilometer-notification-agent" Jan 29 17:09:58 crc kubenswrapper[4886]: E0129 17:09:58.580004 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="295921c4-07ca-4972-a4fa-0a64f46855ec" containerName="proxy-httpd" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.580011 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="295921c4-07ca-4972-a4fa-0a64f46855ec" containerName="proxy-httpd" Jan 29 17:09:58 crc kubenswrapper[4886]: E0129 17:09:58.580038 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="295921c4-07ca-4972-a4fa-0a64f46855ec" containerName="ceilometer-central-agent" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.580044 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="295921c4-07ca-4972-a4fa-0a64f46855ec" containerName="ceilometer-central-agent" Jan 29 17:09:58 crc kubenswrapper[4886]: E0129 17:09:58.580059 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="295921c4-07ca-4972-a4fa-0a64f46855ec" containerName="sg-core" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.580066 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="295921c4-07ca-4972-a4fa-0a64f46855ec" containerName="sg-core" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.586720 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="295921c4-07ca-4972-a4fa-0a64f46855ec" containerName="ceilometer-central-agent" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.586765 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="295921c4-07ca-4972-a4fa-0a64f46855ec" containerName="sg-core" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.586799 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="295921c4-07ca-4972-a4fa-0a64f46855ec" containerName="ceilometer-notification-agent" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.586817 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="295921c4-07ca-4972-a4fa-0a64f46855ec" containerName="proxy-httpd" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.600657 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.603703 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.607366 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.624805 4886 scope.go:117] "RemoveContainer" containerID="63a6dbf76c0560d2045aa913e46fcd8eb27522f3a2df8c23f4d345a42f6982ef" Jan 29 17:09:58 crc kubenswrapper[4886]: E0129 17:09:58.625935 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63a6dbf76c0560d2045aa913e46fcd8eb27522f3a2df8c23f4d345a42f6982ef\": container with ID starting with 63a6dbf76c0560d2045aa913e46fcd8eb27522f3a2df8c23f4d345a42f6982ef not found: ID does not exist" containerID="63a6dbf76c0560d2045aa913e46fcd8eb27522f3a2df8c23f4d345a42f6982ef" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.625975 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63a6dbf76c0560d2045aa913e46fcd8eb27522f3a2df8c23f4d345a42f6982ef"} err="failed to get container status \"63a6dbf76c0560d2045aa913e46fcd8eb27522f3a2df8c23f4d345a42f6982ef\": rpc error: code = NotFound desc = could not find container \"63a6dbf76c0560d2045aa913e46fcd8eb27522f3a2df8c23f4d345a42f6982ef\": container with ID starting with 63a6dbf76c0560d2045aa913e46fcd8eb27522f3a2df8c23f4d345a42f6982ef not found: ID does not exist" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.626002 4886 scope.go:117] "RemoveContainer" containerID="3856ce84dbdc829026cdc077123a144ae1db22ed2ef5daec2a2a38e79ea5fff2" Jan 29 17:09:58 crc kubenswrapper[4886]: E0129 17:09:58.626460 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3856ce84dbdc829026cdc077123a144ae1db22ed2ef5daec2a2a38e79ea5fff2\": container with ID starting with 3856ce84dbdc829026cdc077123a144ae1db22ed2ef5daec2a2a38e79ea5fff2 not found: ID does not exist" containerID="3856ce84dbdc829026cdc077123a144ae1db22ed2ef5daec2a2a38e79ea5fff2" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.626537 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3856ce84dbdc829026cdc077123a144ae1db22ed2ef5daec2a2a38e79ea5fff2"} err="failed to get container status \"3856ce84dbdc829026cdc077123a144ae1db22ed2ef5daec2a2a38e79ea5fff2\": rpc error: code = NotFound desc = could not find container \"3856ce84dbdc829026cdc077123a144ae1db22ed2ef5daec2a2a38e79ea5fff2\": container with ID starting with 3856ce84dbdc829026cdc077123a144ae1db22ed2ef5daec2a2a38e79ea5fff2 not found: ID does not exist" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.626572 4886 scope.go:117] "RemoveContainer" containerID="35e24ed99f8fd2890904f1ca37992a754b300543953f2f3061639a8631f92529" Jan 29 17:09:58 crc kubenswrapper[4886]: E0129 17:09:58.626990 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35e24ed99f8fd2890904f1ca37992a754b300543953f2f3061639a8631f92529\": container with ID starting with 35e24ed99f8fd2890904f1ca37992a754b300543953f2f3061639a8631f92529 not found: ID does not exist" containerID="35e24ed99f8fd2890904f1ca37992a754b300543953f2f3061639a8631f92529" Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 
Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.627019 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35e24ed99f8fd2890904f1ca37992a754b300543953f2f3061639a8631f92529"} err="failed to get container status \"35e24ed99f8fd2890904f1ca37992a754b300543953f2f3061639a8631f92529\": rpc error: code = NotFound desc = could not find container \"35e24ed99f8fd2890904f1ca37992a754b300543953f2f3061639a8631f92529\": container with ID starting with 35e24ed99f8fd2890904f1ca37992a754b300543953f2f3061639a8631f92529 not found: ID does not exist"
Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.627036 4886 scope.go:117] "RemoveContainer" containerID="0b0960c021f6fe492666e7a5f8550203f34c505c88a04448efdf009572fba707"
Jan 29 17:09:58 crc kubenswrapper[4886]: E0129 17:09:58.629784 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b0960c021f6fe492666e7a5f8550203f34c505c88a04448efdf009572fba707\": container with ID starting with 0b0960c021f6fe492666e7a5f8550203f34c505c88a04448efdf009572fba707 not found: ID does not exist" containerID="0b0960c021f6fe492666e7a5f8550203f34c505c88a04448efdf009572fba707"
Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.629820 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b0960c021f6fe492666e7a5f8550203f34c505c88a04448efdf009572fba707"} err="failed to get container status \"0b0960c021f6fe492666e7a5f8550203f34c505c88a04448efdf009572fba707\": rpc error: code = NotFound desc = could not find container \"0b0960c021f6fe492666e7a5f8550203f34c505c88a04448efdf009572fba707\": container with ID starting with 0b0960c021f6fe492666e7a5f8550203f34c505c88a04448efdf009572fba707 not found: ID does not exist"
Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.647447 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="295921c4-07ca-4972-a4fa-0a64f46855ec" path="/var/lib/kubelet/pods/295921c4-07ca-4972-a4fa-0a64f46855ec/volumes"
Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.648218 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.771114 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\") " pod="openstack/ceilometer-0"
Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.771468 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-run-httpd\") pod \"ceilometer-0\" (UID: \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\") " pod="openstack/ceilometer-0"
Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.771531 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-scripts\") pod \"ceilometer-0\" (UID: \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\") " pod="openstack/ceilometer-0"
Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.771618 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-config-data\") pod \"ceilometer-0\" (UID: \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\") " pod="openstack/ceilometer-0"
Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.771671 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-log-httpd\") pod \"ceilometer-0\" (UID: \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\") " pod="openstack/ceilometer-0"
Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.771735 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\") " pod="openstack/ceilometer-0"
Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.771767 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcsqj\" (UniqueName: \"kubernetes.io/projected/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-kube-api-access-fcsqj\") pod \"ceilometer-0\" (UID: \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\") " pod="openstack/ceilometer-0"
Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.873532 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-config-data\") pod \"ceilometer-0\" (UID: \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\") " pod="openstack/ceilometer-0"
Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.873618 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-log-httpd\") pod \"ceilometer-0\" (UID: \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\") " pod="openstack/ceilometer-0"
Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.873707 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\") " pod="openstack/ceilometer-0"
Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.873751 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcsqj\" (UniqueName: \"kubernetes.io/projected/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-kube-api-access-fcsqj\") pod \"ceilometer-0\" (UID: \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\") " pod="openstack/ceilometer-0"
Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.873769 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\") " pod="openstack/ceilometer-0"
Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.873793 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-run-httpd\") pod \"ceilometer-0\" (UID: \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\") " pod="openstack/ceilometer-0"
Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.873835 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-scripts\") pod \"ceilometer-0\" (UID: \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\") " pod="openstack/ceilometer-0"
Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.874159 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-log-httpd\") pod \"ceilometer-0\" (UID: \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\") " pod="openstack/ceilometer-0"
Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.875038 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-run-httpd\") pod \"ceilometer-0\" (UID: \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\") " pod="openstack/ceilometer-0"
Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.879044 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\") " pod="openstack/ceilometer-0"
Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.879122 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-scripts\") pod \"ceilometer-0\" (UID: \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\") " pod="openstack/ceilometer-0"
Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.879307 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\") " pod="openstack/ceilometer-0"
Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.879740 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-config-data\") pod \"ceilometer-0\" (UID: \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\") " pod="openstack/ceilometer-0"
Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.890658 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcsqj\" (UniqueName: \"kubernetes.io/projected/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-kube-api-access-fcsqj\") pod \"ceilometer-0\" (UID: \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\") " pod="openstack/ceilometer-0"
Jan 29 17:09:58 crc kubenswrapper[4886]: I0129 17:09:58.930979 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:09:59 crc kubenswrapper[4886]: I0129 17:09:59.231883 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:09:59 crc kubenswrapper[4886]: I0129 17:09:59.528943 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:10:00 crc kubenswrapper[4886]: I0129 17:10:00.487121 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b7z4z" event={"ID":"265d5adc-ace5-4008-99d5-206b5182e6d4","Type":"ContainerStarted","Data":"4f918436d3a4458be4f1385c7fcfd7781d59051384022442109a970fd2117ede"} Jan 29 17:10:00 crc kubenswrapper[4886]: I0129 17:10:00.490088 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da502cd2-7a05-4d82-a90e-cfbd4069b0ac","Type":"ContainerStarted","Data":"72d7fa6925704b9669a07a61d5a64685973e8bd1e0037e203f9d28200da940d5"} Jan 29 17:10:00 crc kubenswrapper[4886]: I0129 17:10:00.531841 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b7z4z" podStartSLOduration=4.043282847 podStartE2EDuration="7.531816239s" podCreationTimestamp="2026-01-29 17:09:53 +0000 UTC" firstStartedPulling="2026-01-29 17:09:55.401533259 +0000 UTC m=+2878.310252531" lastFinishedPulling="2026-01-29 17:09:58.890066651 +0000 UTC m=+2881.798785923" observedRunningTime="2026-01-29 17:10:00.510964543 +0000 UTC m=+2883.419683915" watchObservedRunningTime="2026-01-29 17:10:00.531816239 +0000 UTC m=+2883.440535551" Jan 29 17:10:01 crc kubenswrapper[4886]: I0129 17:10:01.514494 4886 generic.go:334] "Generic (PLEG): container finished" podID="8c6e91d6-fc51-499e-b78b-00e296eac00d" containerID="f7c0f51e04a1da68994cf51db97c7c851cff30a285cc4a371f750594853805ae" exitCode=0 Jan 29 17:10:01 crc kubenswrapper[4886]: I0129 17:10:01.514566 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8c6e91d6-fc51-499e-b78b-00e296eac00d","Type":"ContainerDied","Data":"f7c0f51e04a1da68994cf51db97c7c851cff30a285cc4a371f750594853805ae"} Jan 29 17:10:01 crc kubenswrapper[4886]: I0129 17:10:01.927481 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.048759 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.059742 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c6e91d6-fc51-499e-b78b-00e296eac00d-combined-ca-bundle\") pod \"8c6e91d6-fc51-499e-b78b-00e296eac00d\" (UID: \"8c6e91d6-fc51-499e-b78b-00e296eac00d\") " Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.059795 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c6e91d6-fc51-499e-b78b-00e296eac00d-logs\") pod \"8c6e91d6-fc51-499e-b78b-00e296eac00d\" (UID: \"8c6e91d6-fc51-499e-b78b-00e296eac00d\") " Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.060558 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c6e91d6-fc51-499e-b78b-00e296eac00d-config-data\") pod \"8c6e91d6-fc51-499e-b78b-00e296eac00d\" (UID: \"8c6e91d6-fc51-499e-b78b-00e296eac00d\") " Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.060604 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gwd6\" (UniqueName: \"kubernetes.io/projected/8c6e91d6-fc51-499e-b78b-00e296eac00d-kube-api-access-5gwd6\") pod \"8c6e91d6-fc51-499e-b78b-00e296eac00d\" (UID: \"8c6e91d6-fc51-499e-b78b-00e296eac00d\") " Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.060682 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c6e91d6-fc51-499e-b78b-00e296eac00d-logs" (OuterVolumeSpecName: "logs") pod "8c6e91d6-fc51-499e-b78b-00e296eac00d" (UID: "8c6e91d6-fc51-499e-b78b-00e296eac00d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.061854 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c6e91d6-fc51-499e-b78b-00e296eac00d-logs\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.065722 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c6e91d6-fc51-499e-b78b-00e296eac00d-kube-api-access-5gwd6" (OuterVolumeSpecName: "kube-api-access-5gwd6") pod "8c6e91d6-fc51-499e-b78b-00e296eac00d" (UID: "8c6e91d6-fc51-499e-b78b-00e296eac00d"). InnerVolumeSpecName "kube-api-access-5gwd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.104069 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.107607 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c6e91d6-fc51-499e-b78b-00e296eac00d-config-data" (OuterVolumeSpecName: "config-data") pod "8c6e91d6-fc51-499e-b78b-00e296eac00d" (UID: "8c6e91d6-fc51-499e-b78b-00e296eac00d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.110124 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c6e91d6-fc51-499e-b78b-00e296eac00d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c6e91d6-fc51-499e-b78b-00e296eac00d" (UID: "8c6e91d6-fc51-499e-b78b-00e296eac00d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.164155 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c6e91d6-fc51-499e-b78b-00e296eac00d-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.164198 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gwd6\" (UniqueName: \"kubernetes.io/projected/8c6e91d6-fc51-499e-b78b-00e296eac00d-kube-api-access-5gwd6\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.164211 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c6e91d6-fc51-499e-b78b-00e296eac00d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.536925 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8c6e91d6-fc51-499e-b78b-00e296eac00d","Type":"ContainerDied","Data":"2e00cbff980509a81df06975ce0505dd9daf5a8bd0d230ec6e3bf51d83a43450"} Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.536990 4886 scope.go:117] "RemoveContainer" containerID="f7c0f51e04a1da68994cf51db97c7c851cff30a285cc4a371f750594853805ae" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.537120 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.543740 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da502cd2-7a05-4d82-a90e-cfbd4069b0ac","Type":"ContainerStarted","Data":"91baaab9d9528ba788b818d15e20639e4d6e2fffc89317503dfc698ecdb0a06c"} Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.572527 4886 scope.go:117] "RemoveContainer" containerID="b095c2996e7ff38f4d839b7c99b3243d8facce91df007a86d00bced397c851ce" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.591462 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.653194 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.653242 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.653265 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 17:10:02 crc kubenswrapper[4886]: E0129 17:10:02.657158 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c6e91d6-fc51-499e-b78b-00e296eac00d" containerName="nova-api-log" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.657184 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c6e91d6-fc51-499e-b78b-00e296eac00d" containerName="nova-api-log" Jan 29 17:10:02 crc kubenswrapper[4886]: E0129 17:10:02.657211 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c6e91d6-fc51-499e-b78b-00e296eac00d" containerName="nova-api-api" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.657219 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c6e91d6-fc51-499e-b78b-00e296eac00d" containerName="nova-api-api" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.657440 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c6e91d6-fc51-499e-b78b-00e296eac00d" containerName="nova-api-log" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.657458 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c6e91d6-fc51-499e-b78b-00e296eac00d" containerName="nova-api-api" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.671797 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.671927 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.675870 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.676096 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.679922 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.773606 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.773643 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.781613 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b515f59a-4b3a-4821-bbec-8e622a8164e6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b515f59a-4b3a-4821-bbec-8e622a8164e6\") " pod="openstack/nova-api-0" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.781703 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r59zt\" (UniqueName: \"kubernetes.io/projected/b515f59a-4b3a-4821-bbec-8e622a8164e6-kube-api-access-r59zt\") pod \"nova-api-0\" (UID: \"b515f59a-4b3a-4821-bbec-8e622a8164e6\") " pod="openstack/nova-api-0" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.781744 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b515f59a-4b3a-4821-bbec-8e622a8164e6-config-data\") pod \"nova-api-0\" (UID: \"b515f59a-4b3a-4821-bbec-8e622a8164e6\") " pod="openstack/nova-api-0" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.781780 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b515f59a-4b3a-4821-bbec-8e622a8164e6-public-tls-certs\") pod \"nova-api-0\" (UID: \"b515f59a-4b3a-4821-bbec-8e622a8164e6\") " pod="openstack/nova-api-0" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.781810 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b515f59a-4b3a-4821-bbec-8e622a8164e6-logs\") pod \"nova-api-0\" (UID: \"b515f59a-4b3a-4821-bbec-8e622a8164e6\") " pod="openstack/nova-api-0" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.781905 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b515f59a-4b3a-4821-bbec-8e622a8164e6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b515f59a-4b3a-4821-bbec-8e622a8164e6\") " pod="openstack/nova-api-0" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.883435 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b515f59a-4b3a-4821-bbec-8e622a8164e6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b515f59a-4b3a-4821-bbec-8e622a8164e6\") " pod="openstack/nova-api-0" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.883568 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b515f59a-4b3a-4821-bbec-8e622a8164e6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b515f59a-4b3a-4821-bbec-8e622a8164e6\") " pod="openstack/nova-api-0" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.883592 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r59zt\" (UniqueName: \"kubernetes.io/projected/b515f59a-4b3a-4821-bbec-8e622a8164e6-kube-api-access-r59zt\") pod \"nova-api-0\" (UID: \"b515f59a-4b3a-4821-bbec-8e622a8164e6\") " pod="openstack/nova-api-0" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.883651 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b515f59a-4b3a-4821-bbec-8e622a8164e6-config-data\") pod \"nova-api-0\" (UID: \"b515f59a-4b3a-4821-bbec-8e622a8164e6\") " pod="openstack/nova-api-0" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.883690 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b515f59a-4b3a-4821-bbec-8e622a8164e6-public-tls-certs\") pod \"nova-api-0\" (UID: \"b515f59a-4b3a-4821-bbec-8e622a8164e6\") " pod="openstack/nova-api-0" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.883725 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b515f59a-4b3a-4821-bbec-8e622a8164e6-logs\") pod \"nova-api-0\" (UID: \"b515f59a-4b3a-4821-bbec-8e622a8164e6\") " pod="openstack/nova-api-0" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.886491 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b515f59a-4b3a-4821-bbec-8e622a8164e6-logs\") pod \"nova-api-0\" (UID: \"b515f59a-4b3a-4821-bbec-8e622a8164e6\") " pod="openstack/nova-api-0" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.891236 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b515f59a-4b3a-4821-bbec-8e622a8164e6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b515f59a-4b3a-4821-bbec-8e622a8164e6\") " pod="openstack/nova-api-0" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.891709 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b515f59a-4b3a-4821-bbec-8e622a8164e6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b515f59a-4b3a-4821-bbec-8e622a8164e6\") " pod="openstack/nova-api-0" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.916901 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b515f59a-4b3a-4821-bbec-8e622a8164e6-public-tls-certs\") pod \"nova-api-0\" (UID: \"b515f59a-4b3a-4821-bbec-8e622a8164e6\") " pod="openstack/nova-api-0" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.920069 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b515f59a-4b3a-4821-bbec-8e622a8164e6-config-data\") pod \"nova-api-0\" (UID: \"b515f59a-4b3a-4821-bbec-8e622a8164e6\") " pod="openstack/nova-api-0" Jan 29 17:10:02 crc kubenswrapper[4886]: I0129 17:10:02.922594 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-r59zt\" (UniqueName: \"kubernetes.io/projected/b515f59a-4b3a-4821-bbec-8e622a8164e6-kube-api-access-r59zt\") pod \"nova-api-0\" (UID: \"b515f59a-4b3a-4821-bbec-8e622a8164e6\") " pod="openstack/nova-api-0" Jan 29 17:10:03 crc kubenswrapper[4886]: I0129 17:10:03.034447 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-ddfqz"] Jan 29 17:10:03 crc kubenswrapper[4886]: I0129 17:10:03.037054 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ddfqz" Jan 29 17:10:03 crc kubenswrapper[4886]: I0129 17:10:03.056097 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 29 17:10:03 crc kubenswrapper[4886]: I0129 17:10:03.056463 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 29 17:10:03 crc kubenswrapper[4886]: I0129 17:10:03.077467 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 17:10:03 crc kubenswrapper[4886]: I0129 17:10:03.120202 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-ddfqz"] Jan 29 17:10:03 crc kubenswrapper[4886]: I0129 17:10:03.232735 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwxx4\" (UniqueName: \"kubernetes.io/projected/7a1c51cd-f91d-406b-815c-00879a9d6401-kube-api-access-xwxx4\") pod \"nova-cell1-cell-mapping-ddfqz\" (UID: \"7a1c51cd-f91d-406b-815c-00879a9d6401\") " pod="openstack/nova-cell1-cell-mapping-ddfqz" Jan 29 17:10:03 crc kubenswrapper[4886]: I0129 17:10:03.232846 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a1c51cd-f91d-406b-815c-00879a9d6401-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ddfqz\" (UID: \"7a1c51cd-f91d-406b-815c-00879a9d6401\") " pod="openstack/nova-cell1-cell-mapping-ddfqz" Jan 29 17:10:03 crc kubenswrapper[4886]: I0129 17:10:03.232918 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a1c51cd-f91d-406b-815c-00879a9d6401-scripts\") pod \"nova-cell1-cell-mapping-ddfqz\" (UID: \"7a1c51cd-f91d-406b-815c-00879a9d6401\") " pod="openstack/nova-cell1-cell-mapping-ddfqz" Jan 29 17:10:03 crc kubenswrapper[4886]: I0129 17:10:03.233009 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a1c51cd-f91d-406b-815c-00879a9d6401-config-data\") pod \"nova-cell1-cell-mapping-ddfqz\" (UID: \"7a1c51cd-f91d-406b-815c-00879a9d6401\") " pod="openstack/nova-cell1-cell-mapping-ddfqz" Jan 29 17:10:03 crc kubenswrapper[4886]: I0129 17:10:03.334872 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a1c51cd-f91d-406b-815c-00879a9d6401-scripts\") pod \"nova-cell1-cell-mapping-ddfqz\" (UID: \"7a1c51cd-f91d-406b-815c-00879a9d6401\") " pod="openstack/nova-cell1-cell-mapping-ddfqz" Jan 29 17:10:03 crc kubenswrapper[4886]: I0129 17:10:03.335049 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a1c51cd-f91d-406b-815c-00879a9d6401-config-data\") pod \"nova-cell1-cell-mapping-ddfqz\" (UID: 
\"7a1c51cd-f91d-406b-815c-00879a9d6401\") " pod="openstack/nova-cell1-cell-mapping-ddfqz" Jan 29 17:10:03 crc kubenswrapper[4886]: I0129 17:10:03.335173 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwxx4\" (UniqueName: \"kubernetes.io/projected/7a1c51cd-f91d-406b-815c-00879a9d6401-kube-api-access-xwxx4\") pod \"nova-cell1-cell-mapping-ddfqz\" (UID: \"7a1c51cd-f91d-406b-815c-00879a9d6401\") " pod="openstack/nova-cell1-cell-mapping-ddfqz" Jan 29 17:10:03 crc kubenswrapper[4886]: I0129 17:10:03.335221 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a1c51cd-f91d-406b-815c-00879a9d6401-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ddfqz\" (UID: \"7a1c51cd-f91d-406b-815c-00879a9d6401\") " pod="openstack/nova-cell1-cell-mapping-ddfqz" Jan 29 17:10:03 crc kubenswrapper[4886]: I0129 17:10:03.345876 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a1c51cd-f91d-406b-815c-00879a9d6401-scripts\") pod \"nova-cell1-cell-mapping-ddfqz\" (UID: \"7a1c51cd-f91d-406b-815c-00879a9d6401\") " pod="openstack/nova-cell1-cell-mapping-ddfqz" Jan 29 17:10:03 crc kubenswrapper[4886]: I0129 17:10:03.346184 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a1c51cd-f91d-406b-815c-00879a9d6401-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ddfqz\" (UID: \"7a1c51cd-f91d-406b-815c-00879a9d6401\") " pod="openstack/nova-cell1-cell-mapping-ddfqz" Jan 29 17:10:03 crc kubenswrapper[4886]: I0129 17:10:03.347026 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a1c51cd-f91d-406b-815c-00879a9d6401-config-data\") pod \"nova-cell1-cell-mapping-ddfqz\" (UID: \"7a1c51cd-f91d-406b-815c-00879a9d6401\") " pod="openstack/nova-cell1-cell-mapping-ddfqz" Jan 29 17:10:03 crc kubenswrapper[4886]: I0129 17:10:03.358180 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwxx4\" (UniqueName: \"kubernetes.io/projected/7a1c51cd-f91d-406b-815c-00879a9d6401-kube-api-access-xwxx4\") pod \"nova-cell1-cell-mapping-ddfqz\" (UID: \"7a1c51cd-f91d-406b-815c-00879a9d6401\") " pod="openstack/nova-cell1-cell-mapping-ddfqz" Jan 29 17:10:03 crc kubenswrapper[4886]: I0129 17:10:03.432258 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ddfqz" Jan 29 17:10:03 crc kubenswrapper[4886]: I0129 17:10:03.594847 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da502cd2-7a05-4d82-a90e-cfbd4069b0ac","Type":"ContainerStarted","Data":"d4c2814ebfa5456f9a32d52477ed9133aa03f6c310e426d0feadc41c2659a8a9"} Jan 29 17:10:03 crc kubenswrapper[4886]: I0129 17:10:03.723635 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 17:10:03 crc kubenswrapper[4886]: I0129 17:10:03.800602 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6ba13f7f-cb9d-4147-9f9d-982bd5daac77" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.10:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 17:10:03 crc kubenswrapper[4886]: I0129 17:10:03.800758 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6ba13f7f-cb9d-4147-9f9d-982bd5daac77" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.10:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 17:10:04 crc kubenswrapper[4886]: I0129 17:10:04.050479 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b7z4z" Jan 29 17:10:04 crc kubenswrapper[4886]: I0129 17:10:04.050521 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b7z4z" Jan 29 17:10:04 crc kubenswrapper[4886]: I0129 17:10:04.088613 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-ddfqz"] Jan 29 17:10:04 crc kubenswrapper[4886]: I0129 17:10:04.663110 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c6e91d6-fc51-499e-b78b-00e296eac00d" path="/var/lib/kubelet/pods/8c6e91d6-fc51-499e-b78b-00e296eac00d/volumes" Jan 29 17:10:04 crc kubenswrapper[4886]: I0129 17:10:04.673952 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da502cd2-7a05-4d82-a90e-cfbd4069b0ac","Type":"ContainerStarted","Data":"189370fd8336eb715dd7e8e4fbb1c1dcacac0f2820ddab52e349e5fc03b6bbea"} Jan 29 17:10:04 crc kubenswrapper[4886]: I0129 17:10:04.683104 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ddfqz" event={"ID":"7a1c51cd-f91d-406b-815c-00879a9d6401","Type":"ContainerStarted","Data":"5be86521758fe7c03f20fd8b758e10774f421701b95693128fa47b2a2e5adc70"} Jan 29 17:10:04 crc kubenswrapper[4886]: I0129 17:10:04.683150 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ddfqz" event={"ID":"7a1c51cd-f91d-406b-815c-00879a9d6401","Type":"ContainerStarted","Data":"f1662f2f91761a984c86477b3a390f7b3bd8f222aea924e68ce2bb82b98bbf96"} Jan 29 17:10:04 crc kubenswrapper[4886]: I0129 17:10:04.705855 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b515f59a-4b3a-4821-bbec-8e622a8164e6","Type":"ContainerStarted","Data":"297512a17905e8884ba2dee2e1bd0e97f5fbde7e67ab2e041189401e3a8b1069"} Jan 29 17:10:04 crc kubenswrapper[4886]: I0129 17:10:04.705900 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"b515f59a-4b3a-4821-bbec-8e622a8164e6","Type":"ContainerStarted","Data":"3b5aab9a83beedb9411f1928c81b699649b72f9a5c36a34dc864ad27dbc02c85"} Jan 29 17:10:04 crc kubenswrapper[4886]: I0129 17:10:04.718614 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-ddfqz" podStartSLOduration=2.718596256 podStartE2EDuration="2.718596256s" podCreationTimestamp="2026-01-29 17:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:10:04.714815358 +0000 UTC m=+2887.623534630" watchObservedRunningTime="2026-01-29 17:10:04.718596256 +0000 UTC m=+2887.627315528" Jan 29 17:10:05 crc kubenswrapper[4886]: I0129 17:10:05.073503 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-fh86h" Jan 29 17:10:05 crc kubenswrapper[4886]: I0129 17:10:05.125257 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-b7z4z" podUID="265d5adc-ace5-4008-99d5-206b5182e6d4" containerName="registry-server" probeResult="failure" output=< Jan 29 17:10:05 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Jan 29 17:10:05 crc kubenswrapper[4886]: > Jan 29 17:10:05 crc kubenswrapper[4886]: I0129 17:10:05.176407 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-zdbgk"] Jan 29 17:10:05 crc kubenswrapper[4886]: I0129 17:10:05.176758 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" podUID="8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1" containerName="dnsmasq-dns" containerID="cri-o://18dccc69ea12ffd53b4d4c8e312d9e5ee415348aafbce21b941019b15077a6b6" gracePeriod=10 Jan 29 17:10:05 crc kubenswrapper[4886]: I0129 17:10:05.735068 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b515f59a-4b3a-4821-bbec-8e622a8164e6","Type":"ContainerStarted","Data":"5279babaff011b0a7c0724784680ba960a9fce4465f977efe275f3b290d89fab"} Jan 29 17:10:05 crc kubenswrapper[4886]: I0129 17:10:05.739592 4886 generic.go:334] "Generic (PLEG): container finished" podID="8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1" containerID="18dccc69ea12ffd53b4d4c8e312d9e5ee415348aafbce21b941019b15077a6b6" exitCode=0 Jan 29 17:10:05 crc kubenswrapper[4886]: I0129 17:10:05.739659 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" event={"ID":"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1","Type":"ContainerDied","Data":"18dccc69ea12ffd53b4d4c8e312d9e5ee415348aafbce21b941019b15077a6b6"} Jan 29 17:10:05 crc kubenswrapper[4886]: I0129 17:10:05.788506 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.788480844 podStartE2EDuration="3.788480844s" podCreationTimestamp="2026-01-29 17:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:10:05.769703587 +0000 UTC m=+2888.678422859" watchObservedRunningTime="2026-01-29 17:10:05.788480844 +0000 UTC m=+2888.697200126" Jan 29 17:10:05 crc kubenswrapper[4886]: I0129 17:10:05.907958 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" Jan 29 17:10:06 crc kubenswrapper[4886]: I0129 17:10:06.051926 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-ovsdbserver-sb\") pod \"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1\" (UID: \"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1\") " Jan 29 17:10:06 crc kubenswrapper[4886]: I0129 17:10:06.052037 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-config\") pod \"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1\" (UID: \"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1\") " Jan 29 17:10:06 crc kubenswrapper[4886]: I0129 17:10:06.052080 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-dns-swift-storage-0\") pod \"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1\" (UID: \"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1\") " Jan 29 17:10:06 crc kubenswrapper[4886]: I0129 17:10:06.052130 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-dns-svc\") pod \"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1\" (UID: \"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1\") " Jan 29 17:10:06 crc kubenswrapper[4886]: I0129 17:10:06.052157 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-ovsdbserver-nb\") pod \"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1\" (UID: \"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1\") " Jan 29 17:10:06 crc kubenswrapper[4886]: I0129 17:10:06.052229 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9csz\" (UniqueName: \"kubernetes.io/projected/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-kube-api-access-x9csz\") pod \"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1\" (UID: \"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1\") " Jan 29 17:10:06 crc kubenswrapper[4886]: I0129 17:10:06.061579 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-kube-api-access-x9csz" (OuterVolumeSpecName: "kube-api-access-x9csz") pod "8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1" (UID: "8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1"). InnerVolumeSpecName "kube-api-access-x9csz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:10:06 crc kubenswrapper[4886]: I0129 17:10:06.129548 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1" (UID: "8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:10:06 crc kubenswrapper[4886]: I0129 17:10:06.140640 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-config" (OuterVolumeSpecName: "config") pod "8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1" (UID: "8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:10:06 crc kubenswrapper[4886]: I0129 17:10:06.155935 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:06 crc kubenswrapper[4886]: I0129 17:10:06.155973 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9csz\" (UniqueName: \"kubernetes.io/projected/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-kube-api-access-x9csz\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:06 crc kubenswrapper[4886]: I0129 17:10:06.155987 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:06 crc kubenswrapper[4886]: I0129 17:10:06.157868 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1" (UID: "8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:10:06 crc kubenswrapper[4886]: I0129 17:10:06.206766 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1" (UID: "8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:10:06 crc kubenswrapper[4886]: I0129 17:10:06.209741 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1" (UID: "8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:10:06 crc kubenswrapper[4886]: I0129 17:10:06.258857 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:06 crc kubenswrapper[4886]: I0129 17:10:06.258889 4886 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:06 crc kubenswrapper[4886]: I0129 17:10:06.258898 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:06 crc kubenswrapper[4886]: I0129 17:10:06.752904 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" event={"ID":"8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1","Type":"ContainerDied","Data":"f636861581833a86368762de32a4ca62df7734738d06a2800f3b6b0ee4fb4aa1"} Jan 29 17:10:06 crc kubenswrapper[4886]: I0129 17:10:06.752960 4886 scope.go:117] "RemoveContainer" containerID="18dccc69ea12ffd53b4d4c8e312d9e5ee415348aafbce21b941019b15077a6b6" Jan 29 17:10:06 crc kubenswrapper[4886]: I0129 17:10:06.753524 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-zdbgk" Jan 29 17:10:06 crc kubenswrapper[4886]: I0129 17:10:06.792685 4886 scope.go:117] "RemoveContainer" containerID="8bfd8a8fe8f520c0bdd3a5164fe133a10f3e76f19d1c34103c42b1d9ab4fdfeb" Jan 29 17:10:06 crc kubenswrapper[4886]: I0129 17:10:06.809059 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-zdbgk"] Jan 29 17:10:06 crc kubenswrapper[4886]: I0129 17:10:06.820555 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-zdbgk"] Jan 29 17:10:07 crc kubenswrapper[4886]: I0129 17:10:07.765108 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da502cd2-7a05-4d82-a90e-cfbd4069b0ac","Type":"ContainerStarted","Data":"266a8e9c96bb1b9fbb7a767f2b35ad40929d744419c9ebb7543402aacf3910b9"} Jan 29 17:10:07 crc kubenswrapper[4886]: I0129 17:10:07.765470 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 17:10:07 crc kubenswrapper[4886]: I0129 17:10:07.765472 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="da502cd2-7a05-4d82-a90e-cfbd4069b0ac" containerName="ceilometer-central-agent" containerID="cri-o://91baaab9d9528ba788b818d15e20639e4d6e2fffc89317503dfc698ecdb0a06c" gracePeriod=30 Jan 29 17:10:07 crc kubenswrapper[4886]: I0129 17:10:07.765603 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="da502cd2-7a05-4d82-a90e-cfbd4069b0ac" containerName="proxy-httpd" containerID="cri-o://266a8e9c96bb1b9fbb7a767f2b35ad40929d744419c9ebb7543402aacf3910b9" gracePeriod=30 Jan 29 17:10:07 crc kubenswrapper[4886]: I0129 17:10:07.765652 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="da502cd2-7a05-4d82-a90e-cfbd4069b0ac" containerName="sg-core" containerID="cri-o://189370fd8336eb715dd7e8e4fbb1c1dcacac0f2820ddab52e349e5fc03b6bbea" gracePeriod=30 
Jan 29 17:10:07 crc kubenswrapper[4886]: I0129 17:10:07.765693 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="da502cd2-7a05-4d82-a90e-cfbd4069b0ac" containerName="ceilometer-notification-agent" containerID="cri-o://d4c2814ebfa5456f9a32d52477ed9133aa03f6c310e426d0feadc41c2659a8a9" gracePeriod=30 Jan 29 17:10:07 crc kubenswrapper[4886]: I0129 17:10:07.796462 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.380294784 podStartE2EDuration="9.796436499s" podCreationTimestamp="2026-01-29 17:09:58 +0000 UTC" firstStartedPulling="2026-01-29 17:09:59.534310502 +0000 UTC m=+2882.443029774" lastFinishedPulling="2026-01-29 17:10:06.950452217 +0000 UTC m=+2889.859171489" observedRunningTime="2026-01-29 17:10:07.78878708 +0000 UTC m=+2890.697506362" watchObservedRunningTime="2026-01-29 17:10:07.796436499 +0000 UTC m=+2890.705155771" Jan 29 17:10:08 crc kubenswrapper[4886]: I0129 17:10:08.629713 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1" path="/var/lib/kubelet/pods/8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1/volumes" Jan 29 17:10:08 crc kubenswrapper[4886]: I0129 17:10:08.781471 4886 generic.go:334] "Generic (PLEG): container finished" podID="da502cd2-7a05-4d82-a90e-cfbd4069b0ac" containerID="266a8e9c96bb1b9fbb7a767f2b35ad40929d744419c9ebb7543402aacf3910b9" exitCode=0 Jan 29 17:10:08 crc kubenswrapper[4886]: I0129 17:10:08.781528 4886 generic.go:334] "Generic (PLEG): container finished" podID="da502cd2-7a05-4d82-a90e-cfbd4069b0ac" containerID="189370fd8336eb715dd7e8e4fbb1c1dcacac0f2820ddab52e349e5fc03b6bbea" exitCode=2 Jan 29 17:10:08 crc kubenswrapper[4886]: I0129 17:10:08.781542 4886 generic.go:334] "Generic (PLEG): container finished" podID="da502cd2-7a05-4d82-a90e-cfbd4069b0ac" containerID="d4c2814ebfa5456f9a32d52477ed9133aa03f6c310e426d0feadc41c2659a8a9" exitCode=0 Jan 29 17:10:08 crc kubenswrapper[4886]: I0129 17:10:08.781566 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da502cd2-7a05-4d82-a90e-cfbd4069b0ac","Type":"ContainerDied","Data":"266a8e9c96bb1b9fbb7a767f2b35ad40929d744419c9ebb7543402aacf3910b9"} Jan 29 17:10:08 crc kubenswrapper[4886]: I0129 17:10:08.781595 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da502cd2-7a05-4d82-a90e-cfbd4069b0ac","Type":"ContainerDied","Data":"189370fd8336eb715dd7e8e4fbb1c1dcacac0f2820ddab52e349e5fc03b6bbea"} Jan 29 17:10:08 crc kubenswrapper[4886]: I0129 17:10:08.781608 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da502cd2-7a05-4d82-a90e-cfbd4069b0ac","Type":"ContainerDied","Data":"d4c2814ebfa5456f9a32d52477ed9133aa03f6c310e426d0feadc41c2659a8a9"} Jan 29 17:10:09 crc kubenswrapper[4886]: I0129 17:10:09.801132 4886 generic.go:334] "Generic (PLEG): container finished" podID="da502cd2-7a05-4d82-a90e-cfbd4069b0ac" containerID="91baaab9d9528ba788b818d15e20639e4d6e2fffc89317503dfc698ecdb0a06c" exitCode=0 Jan 29 17:10:09 crc kubenswrapper[4886]: I0129 17:10:09.801177 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da502cd2-7a05-4d82-a90e-cfbd4069b0ac","Type":"ContainerDied","Data":"91baaab9d9528ba788b818d15e20639e4d6e2fffc89317503dfc698ecdb0a06c"} Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.294496 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.369494 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcsqj\" (UniqueName: \"kubernetes.io/projected/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-kube-api-access-fcsqj\") pod \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\" (UID: \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\") " Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.369583 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-scripts\") pod \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\" (UID: \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\") " Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.369679 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-config-data\") pod \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\" (UID: \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\") " Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.369805 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-log-httpd\") pod \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\" (UID: \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\") " Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.370302 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "da502cd2-7a05-4d82-a90e-cfbd4069b0ac" (UID: "da502cd2-7a05-4d82-a90e-cfbd4069b0ac"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.370452 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-run-httpd\") pod \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\" (UID: \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\") " Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.370678 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "da502cd2-7a05-4d82-a90e-cfbd4069b0ac" (UID: "da502cd2-7a05-4d82-a90e-cfbd4069b0ac"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.370484 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-combined-ca-bundle\") pod \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\" (UID: \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\") " Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.371121 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-sg-core-conf-yaml\") pod \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\" (UID: \"da502cd2-7a05-4d82-a90e-cfbd4069b0ac\") " Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.371948 4886 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.371970 4886 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.402506 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-scripts" (OuterVolumeSpecName: "scripts") pod "da502cd2-7a05-4d82-a90e-cfbd4069b0ac" (UID: "da502cd2-7a05-4d82-a90e-cfbd4069b0ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.402635 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-kube-api-access-fcsqj" (OuterVolumeSpecName: "kube-api-access-fcsqj") pod "da502cd2-7a05-4d82-a90e-cfbd4069b0ac" (UID: "da502cd2-7a05-4d82-a90e-cfbd4069b0ac"). InnerVolumeSpecName "kube-api-access-fcsqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.460056 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "da502cd2-7a05-4d82-a90e-cfbd4069b0ac" (UID: "da502cd2-7a05-4d82-a90e-cfbd4069b0ac"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.473933 4886 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.473960 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcsqj\" (UniqueName: \"kubernetes.io/projected/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-kube-api-access-fcsqj\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.473969 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.527081 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da502cd2-7a05-4d82-a90e-cfbd4069b0ac" (UID: "da502cd2-7a05-4d82-a90e-cfbd4069b0ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.538022 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-config-data" (OuterVolumeSpecName: "config-data") pod "da502cd2-7a05-4d82-a90e-cfbd4069b0ac" (UID: "da502cd2-7a05-4d82-a90e-cfbd4069b0ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.581432 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.581499 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da502cd2-7a05-4d82-a90e-cfbd4069b0ac-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.813191 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da502cd2-7a05-4d82-a90e-cfbd4069b0ac","Type":"ContainerDied","Data":"72d7fa6925704b9669a07a61d5a64685973e8bd1e0037e203f9d28200da940d5"} Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.813232 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.813243 4886 scope.go:117] "RemoveContainer" containerID="266a8e9c96bb1b9fbb7a767f2b35ad40929d744419c9ebb7543402aacf3910b9" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.815188 4886 generic.go:334] "Generic (PLEG): container finished" podID="7a1c51cd-f91d-406b-815c-00879a9d6401" containerID="5be86521758fe7c03f20fd8b758e10774f421701b95693128fa47b2a2e5adc70" exitCode=0 Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.815230 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ddfqz" event={"ID":"7a1c51cd-f91d-406b-815c-00879a9d6401","Type":"ContainerDied","Data":"5be86521758fe7c03f20fd8b758e10774f421701b95693128fa47b2a2e5adc70"} Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.881400 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.884589 4886 scope.go:117] "RemoveContainer" containerID="189370fd8336eb715dd7e8e4fbb1c1dcacac0f2820ddab52e349e5fc03b6bbea" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.895633 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.912940 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:10:10 crc kubenswrapper[4886]: E0129 17:10:10.913917 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da502cd2-7a05-4d82-a90e-cfbd4069b0ac" containerName="sg-core" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.913939 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="da502cd2-7a05-4d82-a90e-cfbd4069b0ac" containerName="sg-core" Jan 29 17:10:10 crc kubenswrapper[4886]: E0129 17:10:10.913957 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da502cd2-7a05-4d82-a90e-cfbd4069b0ac" containerName="proxy-httpd" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.913963 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="da502cd2-7a05-4d82-a90e-cfbd4069b0ac" containerName="proxy-httpd" Jan 29 17:10:10 crc kubenswrapper[4886]: E0129 17:10:10.914001 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1" containerName="dnsmasq-dns" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.914009 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1" containerName="dnsmasq-dns" Jan 29 17:10:10 crc kubenswrapper[4886]: E0129 17:10:10.914018 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da502cd2-7a05-4d82-a90e-cfbd4069b0ac" containerName="ceilometer-central-agent" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.914024 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="da502cd2-7a05-4d82-a90e-cfbd4069b0ac" containerName="ceilometer-central-agent" Jan 29 17:10:10 crc kubenswrapper[4886]: E0129 17:10:10.914035 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1" containerName="init" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.914041 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1" containerName="init" Jan 29 17:10:10 crc kubenswrapper[4886]: E0129 17:10:10.914055 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da502cd2-7a05-4d82-a90e-cfbd4069b0ac" 
containerName="ceilometer-notification-agent" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.914062 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="da502cd2-7a05-4d82-a90e-cfbd4069b0ac" containerName="ceilometer-notification-agent" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.914283 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="da502cd2-7a05-4d82-a90e-cfbd4069b0ac" containerName="ceilometer-central-agent" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.914301 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="da502cd2-7a05-4d82-a90e-cfbd4069b0ac" containerName="ceilometer-notification-agent" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.914311 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="da502cd2-7a05-4d82-a90e-cfbd4069b0ac" containerName="proxy-httpd" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.914373 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ccf7a7a-f65b-4942-9bfa-bc7a377e6ff1" containerName="dnsmasq-dns" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.914388 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="da502cd2-7a05-4d82-a90e-cfbd4069b0ac" containerName="sg-core" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.917024 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.945362 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.945794 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.946070 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.946219 4886 scope.go:117] "RemoveContainer" containerID="d4c2814ebfa5456f9a32d52477ed9133aa03f6c310e426d0feadc41c2659a8a9" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.990698 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51203b48-4909-45b6-8c3a-296fc4ee639c-run-httpd\") pod \"ceilometer-0\" (UID: \"51203b48-4909-45b6-8c3a-296fc4ee639c\") " pod="openstack/ceilometer-0" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.990868 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51203b48-4909-45b6-8c3a-296fc4ee639c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"51203b48-4909-45b6-8c3a-296fc4ee639c\") " pod="openstack/ceilometer-0" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.991018 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77p6n\" (UniqueName: \"kubernetes.io/projected/51203b48-4909-45b6-8c3a-296fc4ee639c-kube-api-access-77p6n\") pod \"ceilometer-0\" (UID: \"51203b48-4909-45b6-8c3a-296fc4ee639c\") " pod="openstack/ceilometer-0" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.991268 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51203b48-4909-45b6-8c3a-296fc4ee639c-scripts\") pod \"ceilometer-0\" (UID: 
\"51203b48-4909-45b6-8c3a-296fc4ee639c\") " pod="openstack/ceilometer-0" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.991448 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51203b48-4909-45b6-8c3a-296fc4ee639c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"51203b48-4909-45b6-8c3a-296fc4ee639c\") " pod="openstack/ceilometer-0" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.991542 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51203b48-4909-45b6-8c3a-296fc4ee639c-log-httpd\") pod \"ceilometer-0\" (UID: \"51203b48-4909-45b6-8c3a-296fc4ee639c\") " pod="openstack/ceilometer-0" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.991638 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51203b48-4909-45b6-8c3a-296fc4ee639c-config-data\") pod \"ceilometer-0\" (UID: \"51203b48-4909-45b6-8c3a-296fc4ee639c\") " pod="openstack/ceilometer-0" Jan 29 17:10:10 crc kubenswrapper[4886]: I0129 17:10:10.995911 4886 scope.go:117] "RemoveContainer" containerID="91baaab9d9528ba788b818d15e20639e4d6e2fffc89317503dfc698ecdb0a06c" Jan 29 17:10:11 crc kubenswrapper[4886]: I0129 17:10:11.093642 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51203b48-4909-45b6-8c3a-296fc4ee639c-scripts\") pod \"ceilometer-0\" (UID: \"51203b48-4909-45b6-8c3a-296fc4ee639c\") " pod="openstack/ceilometer-0" Jan 29 17:10:11 crc kubenswrapper[4886]: I0129 17:10:11.093754 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51203b48-4909-45b6-8c3a-296fc4ee639c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"51203b48-4909-45b6-8c3a-296fc4ee639c\") " pod="openstack/ceilometer-0" Jan 29 17:10:11 crc kubenswrapper[4886]: I0129 17:10:11.093799 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51203b48-4909-45b6-8c3a-296fc4ee639c-log-httpd\") pod \"ceilometer-0\" (UID: \"51203b48-4909-45b6-8c3a-296fc4ee639c\") " pod="openstack/ceilometer-0" Jan 29 17:10:11 crc kubenswrapper[4886]: I0129 17:10:11.093839 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51203b48-4909-45b6-8c3a-296fc4ee639c-config-data\") pod \"ceilometer-0\" (UID: \"51203b48-4909-45b6-8c3a-296fc4ee639c\") " pod="openstack/ceilometer-0" Jan 29 17:10:11 crc kubenswrapper[4886]: I0129 17:10:11.093876 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51203b48-4909-45b6-8c3a-296fc4ee639c-run-httpd\") pod \"ceilometer-0\" (UID: \"51203b48-4909-45b6-8c3a-296fc4ee639c\") " pod="openstack/ceilometer-0" Jan 29 17:10:11 crc kubenswrapper[4886]: I0129 17:10:11.093926 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51203b48-4909-45b6-8c3a-296fc4ee639c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"51203b48-4909-45b6-8c3a-296fc4ee639c\") " pod="openstack/ceilometer-0" Jan 29 17:10:11 crc kubenswrapper[4886]: I0129 17:10:11.093965 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77p6n\" (UniqueName: \"kubernetes.io/projected/51203b48-4909-45b6-8c3a-296fc4ee639c-kube-api-access-77p6n\") pod \"ceilometer-0\" (UID: \"51203b48-4909-45b6-8c3a-296fc4ee639c\") " pod="openstack/ceilometer-0" Jan 29 17:10:11 crc kubenswrapper[4886]: I0129 17:10:11.095088 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51203b48-4909-45b6-8c3a-296fc4ee639c-run-httpd\") pod \"ceilometer-0\" (UID: \"51203b48-4909-45b6-8c3a-296fc4ee639c\") " pod="openstack/ceilometer-0" Jan 29 17:10:11 crc kubenswrapper[4886]: I0129 17:10:11.095394 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51203b48-4909-45b6-8c3a-296fc4ee639c-log-httpd\") pod \"ceilometer-0\" (UID: \"51203b48-4909-45b6-8c3a-296fc4ee639c\") " pod="openstack/ceilometer-0" Jan 29 17:10:11 crc kubenswrapper[4886]: I0129 17:10:11.098969 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51203b48-4909-45b6-8c3a-296fc4ee639c-scripts\") pod \"ceilometer-0\" (UID: \"51203b48-4909-45b6-8c3a-296fc4ee639c\") " pod="openstack/ceilometer-0" Jan 29 17:10:11 crc kubenswrapper[4886]: I0129 17:10:11.099209 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51203b48-4909-45b6-8c3a-296fc4ee639c-config-data\") pod \"ceilometer-0\" (UID: \"51203b48-4909-45b6-8c3a-296fc4ee639c\") " pod="openstack/ceilometer-0" Jan 29 17:10:11 crc kubenswrapper[4886]: I0129 17:10:11.104760 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51203b48-4909-45b6-8c3a-296fc4ee639c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"51203b48-4909-45b6-8c3a-296fc4ee639c\") " pod="openstack/ceilometer-0" Jan 29 17:10:11 crc kubenswrapper[4886]: I0129 17:10:11.105546 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51203b48-4909-45b6-8c3a-296fc4ee639c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"51203b48-4909-45b6-8c3a-296fc4ee639c\") " pod="openstack/ceilometer-0" Jan 29 17:10:11 crc kubenswrapper[4886]: I0129 17:10:11.128176 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77p6n\" (UniqueName: \"kubernetes.io/projected/51203b48-4909-45b6-8c3a-296fc4ee639c-kube-api-access-77p6n\") pod \"ceilometer-0\" (UID: \"51203b48-4909-45b6-8c3a-296fc4ee639c\") " pod="openstack/ceilometer-0" Jan 29 17:10:11 crc kubenswrapper[4886]: I0129 17:10:11.281943 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:10:11 crc kubenswrapper[4886]: I0129 17:10:11.769784 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:10:11 crc kubenswrapper[4886]: W0129 17:10:11.786504 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51203b48_4909_45b6_8c3a_296fc4ee639c.slice/crio-de5f49918f6704400cdc2de0d7791eff23d5b705cf50d627099de407ae90448b WatchSource:0}: Error finding container de5f49918f6704400cdc2de0d7791eff23d5b705cf50d627099de407ae90448b: Status 404 returned error can't find the container with id de5f49918f6704400cdc2de0d7791eff23d5b705cf50d627099de407ae90448b Jan 29 17:10:11 crc kubenswrapper[4886]: I0129 17:10:11.827987 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51203b48-4909-45b6-8c3a-296fc4ee639c","Type":"ContainerStarted","Data":"de5f49918f6704400cdc2de0d7791eff23d5b705cf50d627099de407ae90448b"} Jan 29 17:10:12 crc kubenswrapper[4886]: I0129 17:10:12.324223 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ddfqz" Jan 29 17:10:12 crc kubenswrapper[4886]: I0129 17:10:12.423207 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a1c51cd-f91d-406b-815c-00879a9d6401-combined-ca-bundle\") pod \"7a1c51cd-f91d-406b-815c-00879a9d6401\" (UID: \"7a1c51cd-f91d-406b-815c-00879a9d6401\") " Jan 29 17:10:12 crc kubenswrapper[4886]: I0129 17:10:12.423511 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a1c51cd-f91d-406b-815c-00879a9d6401-config-data\") pod \"7a1c51cd-f91d-406b-815c-00879a9d6401\" (UID: \"7a1c51cd-f91d-406b-815c-00879a9d6401\") " Jan 29 17:10:12 crc kubenswrapper[4886]: I0129 17:10:12.423571 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a1c51cd-f91d-406b-815c-00879a9d6401-scripts\") pod \"7a1c51cd-f91d-406b-815c-00879a9d6401\" (UID: \"7a1c51cd-f91d-406b-815c-00879a9d6401\") " Jan 29 17:10:12 crc kubenswrapper[4886]: I0129 17:10:12.423782 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwxx4\" (UniqueName: \"kubernetes.io/projected/7a1c51cd-f91d-406b-815c-00879a9d6401-kube-api-access-xwxx4\") pod \"7a1c51cd-f91d-406b-815c-00879a9d6401\" (UID: \"7a1c51cd-f91d-406b-815c-00879a9d6401\") " Jan 29 17:10:12 crc kubenswrapper[4886]: I0129 17:10:12.427453 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a1c51cd-f91d-406b-815c-00879a9d6401-kube-api-access-xwxx4" (OuterVolumeSpecName: "kube-api-access-xwxx4") pod "7a1c51cd-f91d-406b-815c-00879a9d6401" (UID: "7a1c51cd-f91d-406b-815c-00879a9d6401"). InnerVolumeSpecName "kube-api-access-xwxx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:10:12 crc kubenswrapper[4886]: I0129 17:10:12.430471 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a1c51cd-f91d-406b-815c-00879a9d6401-scripts" (OuterVolumeSpecName: "scripts") pod "7a1c51cd-f91d-406b-815c-00879a9d6401" (UID: "7a1c51cd-f91d-406b-815c-00879a9d6401"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:10:12 crc kubenswrapper[4886]: I0129 17:10:12.456426 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a1c51cd-f91d-406b-815c-00879a9d6401-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a1c51cd-f91d-406b-815c-00879a9d6401" (UID: "7a1c51cd-f91d-406b-815c-00879a9d6401"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:10:12 crc kubenswrapper[4886]: I0129 17:10:12.457573 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a1c51cd-f91d-406b-815c-00879a9d6401-config-data" (OuterVolumeSpecName: "config-data") pod "7a1c51cd-f91d-406b-815c-00879a9d6401" (UID: "7a1c51cd-f91d-406b-815c-00879a9d6401"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:10:12 crc kubenswrapper[4886]: I0129 17:10:12.526772 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwxx4\" (UniqueName: \"kubernetes.io/projected/7a1c51cd-f91d-406b-815c-00879a9d6401-kube-api-access-xwxx4\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:12 crc kubenswrapper[4886]: I0129 17:10:12.527001 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a1c51cd-f91d-406b-815c-00879a9d6401-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:12 crc kubenswrapper[4886]: I0129 17:10:12.527087 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a1c51cd-f91d-406b-815c-00879a9d6401-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:12 crc kubenswrapper[4886]: I0129 17:10:12.527147 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a1c51cd-f91d-406b-815c-00879a9d6401-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:12 crc kubenswrapper[4886]: I0129 17:10:12.627248 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da502cd2-7a05-4d82-a90e-cfbd4069b0ac" path="/var/lib/kubelet/pods/da502cd2-7a05-4d82-a90e-cfbd4069b0ac/volumes" Jan 29 17:10:12 crc kubenswrapper[4886]: I0129 17:10:12.780841 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 17:10:12 crc kubenswrapper[4886]: I0129 17:10:12.780925 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 17:10:12 crc kubenswrapper[4886]: I0129 17:10:12.788924 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 17:10:12 crc kubenswrapper[4886]: I0129 17:10:12.790542 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 17:10:12 crc kubenswrapper[4886]: I0129 17:10:12.841443 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51203b48-4909-45b6-8c3a-296fc4ee639c","Type":"ContainerStarted","Data":"c9c0e47c6badbee636eb54a74034a0d58d79d9a5f007d41423ec32b132adc41e"} Jan 29 17:10:12 crc kubenswrapper[4886]: I0129 17:10:12.842592 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ddfqz" Jan 29 17:10:12 crc kubenswrapper[4886]: I0129 17:10:12.842580 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ddfqz" event={"ID":"7a1c51cd-f91d-406b-815c-00879a9d6401","Type":"ContainerDied","Data":"f1662f2f91761a984c86477b3a390f7b3bd8f222aea924e68ce2bb82b98bbf96"} Jan 29 17:10:12 crc kubenswrapper[4886]: I0129 17:10:12.842781 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1662f2f91761a984c86477b3a390f7b3bd8f222aea924e68ce2bb82b98bbf96" Jan 29 17:10:13 crc kubenswrapper[4886]: I0129 17:10:13.034756 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 17:10:13 crc kubenswrapper[4886]: I0129 17:10:13.035261 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b515f59a-4b3a-4821-bbec-8e622a8164e6" containerName="nova-api-log" containerID="cri-o://297512a17905e8884ba2dee2e1bd0e97f5fbde7e67ab2e041189401e3a8b1069" gracePeriod=30 Jan 29 17:10:13 crc kubenswrapper[4886]: I0129 17:10:13.035495 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b515f59a-4b3a-4821-bbec-8e622a8164e6" containerName="nova-api-api" containerID="cri-o://5279babaff011b0a7c0724784680ba960a9fce4465f977efe275f3b290d89fab" gracePeriod=30 Jan 29 17:10:13 crc kubenswrapper[4886]: I0129 17:10:13.048486 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 17:10:13 crc kubenswrapper[4886]: I0129 17:10:13.048695 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="dd8b58c7-942f-4f89-88a0-ce374fd98f0b" containerName="nova-scheduler-scheduler" containerID="cri-o://9734db9b6c351c8b935d8796b19514bcaecf82f2265e11ccf340fb3e8e4c7834" gracePeriod=30 Jan 29 17:10:13 crc kubenswrapper[4886]: I0129 17:10:13.092588 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 17:10:13 crc kubenswrapper[4886]: I0129 17:10:13.854950 4886 generic.go:334] "Generic (PLEG): container finished" podID="b515f59a-4b3a-4821-bbec-8e622a8164e6" containerID="5279babaff011b0a7c0724784680ba960a9fce4465f977efe275f3b290d89fab" exitCode=0 Jan 29 17:10:13 crc kubenswrapper[4886]: I0129 17:10:13.855539 4886 generic.go:334] "Generic (PLEG): container finished" podID="b515f59a-4b3a-4821-bbec-8e622a8164e6" containerID="297512a17905e8884ba2dee2e1bd0e97f5fbde7e67ab2e041189401e3a8b1069" exitCode=143 Jan 29 17:10:13 crc kubenswrapper[4886]: I0129 17:10:13.855021 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b515f59a-4b3a-4821-bbec-8e622a8164e6","Type":"ContainerDied","Data":"5279babaff011b0a7c0724784680ba960a9fce4465f977efe275f3b290d89fab"} Jan 29 17:10:13 crc kubenswrapper[4886]: I0129 17:10:13.855669 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b515f59a-4b3a-4821-bbec-8e622a8164e6","Type":"ContainerDied","Data":"297512a17905e8884ba2dee2e1bd0e97f5fbde7e67ab2e041189401e3a8b1069"} Jan 29 17:10:13 crc kubenswrapper[4886]: I0129 17:10:13.855695 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b515f59a-4b3a-4821-bbec-8e622a8164e6","Type":"ContainerDied","Data":"3b5aab9a83beedb9411f1928c81b699649b72f9a5c36a34dc864ad27dbc02c85"} Jan 29 17:10:13 crc kubenswrapper[4886]: I0129 17:10:13.855707 
4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b5aab9a83beedb9411f1928c81b699649b72f9a5c36a34dc864ad27dbc02c85" Jan 29 17:10:13 crc kubenswrapper[4886]: I0129 17:10:13.862252 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51203b48-4909-45b6-8c3a-296fc4ee639c","Type":"ContainerStarted","Data":"af32cb3d4cad94fb3c21ee16283db0307dd6a80318541f4accfe0f6d97cb6b84"} Jan 29 17:10:13 crc kubenswrapper[4886]: I0129 17:10:13.958622 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.081902 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b515f59a-4b3a-4821-bbec-8e622a8164e6-logs\") pod \"b515f59a-4b3a-4821-bbec-8e622a8164e6\" (UID: \"b515f59a-4b3a-4821-bbec-8e622a8164e6\") " Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.082032 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b515f59a-4b3a-4821-bbec-8e622a8164e6-internal-tls-certs\") pod \"b515f59a-4b3a-4821-bbec-8e622a8164e6\" (UID: \"b515f59a-4b3a-4821-bbec-8e622a8164e6\") " Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.082063 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b515f59a-4b3a-4821-bbec-8e622a8164e6-combined-ca-bundle\") pod \"b515f59a-4b3a-4821-bbec-8e622a8164e6\" (UID: \"b515f59a-4b3a-4821-bbec-8e622a8164e6\") " Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.082080 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b515f59a-4b3a-4821-bbec-8e622a8164e6-public-tls-certs\") pod \"b515f59a-4b3a-4821-bbec-8e622a8164e6\" (UID: \"b515f59a-4b3a-4821-bbec-8e622a8164e6\") " Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.082123 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b515f59a-4b3a-4821-bbec-8e622a8164e6-config-data\") pod \"b515f59a-4b3a-4821-bbec-8e622a8164e6\" (UID: \"b515f59a-4b3a-4821-bbec-8e622a8164e6\") " Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.082153 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r59zt\" (UniqueName: \"kubernetes.io/projected/b515f59a-4b3a-4821-bbec-8e622a8164e6-kube-api-access-r59zt\") pod \"b515f59a-4b3a-4821-bbec-8e622a8164e6\" (UID: \"b515f59a-4b3a-4821-bbec-8e622a8164e6\") " Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.082941 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b515f59a-4b3a-4821-bbec-8e622a8164e6-logs" (OuterVolumeSpecName: "logs") pod "b515f59a-4b3a-4821-bbec-8e622a8164e6" (UID: "b515f59a-4b3a-4821-bbec-8e622a8164e6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.102824 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b515f59a-4b3a-4821-bbec-8e622a8164e6-kube-api-access-r59zt" (OuterVolumeSpecName: "kube-api-access-r59zt") pod "b515f59a-4b3a-4821-bbec-8e622a8164e6" (UID: "b515f59a-4b3a-4821-bbec-8e622a8164e6"). 
InnerVolumeSpecName "kube-api-access-r59zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.131508 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b515f59a-4b3a-4821-bbec-8e622a8164e6-config-data" (OuterVolumeSpecName: "config-data") pod "b515f59a-4b3a-4821-bbec-8e622a8164e6" (UID: "b515f59a-4b3a-4821-bbec-8e622a8164e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.160691 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b515f59a-4b3a-4821-bbec-8e622a8164e6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b515f59a-4b3a-4821-bbec-8e622a8164e6" (UID: "b515f59a-4b3a-4821-bbec-8e622a8164e6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.161007 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b515f59a-4b3a-4821-bbec-8e622a8164e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b515f59a-4b3a-4821-bbec-8e622a8164e6" (UID: "b515f59a-4b3a-4821-bbec-8e622a8164e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.185127 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b515f59a-4b3a-4821-bbec-8e622a8164e6-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.185169 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r59zt\" (UniqueName: \"kubernetes.io/projected/b515f59a-4b3a-4821-bbec-8e622a8164e6-kube-api-access-r59zt\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.185181 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b515f59a-4b3a-4821-bbec-8e622a8164e6-logs\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.185190 4886 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b515f59a-4b3a-4821-bbec-8e622a8164e6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.185198 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b515f59a-4b3a-4821-bbec-8e622a8164e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.208496 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b515f59a-4b3a-4821-bbec-8e622a8164e6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b515f59a-4b3a-4821-bbec-8e622a8164e6" (UID: "b515f59a-4b3a-4821-bbec-8e622a8164e6"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.295495 4886 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b515f59a-4b3a-4821-bbec-8e622a8164e6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.874726 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51203b48-4909-45b6-8c3a-296fc4ee639c","Type":"ContainerStarted","Data":"6c975034f363da994f8f028b9f44a46d5e4b43e5df94d066fa0723bd5320a3f5"} Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.874848 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.875077 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6ba13f7f-cb9d-4147-9f9d-982bd5daac77" containerName="nova-metadata-metadata" containerID="cri-o://cd779590c513b85f1be24ee1be77a1addf20dbbca3b8eb0c655a6287c5d23cb9" gracePeriod=30 Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.875031 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6ba13f7f-cb9d-4147-9f9d-982bd5daac77" containerName="nova-metadata-log" containerID="cri-o://5b523a0231e956d5db224e5c8db2f3e8aaf553d5abc7de07ad05e39c231cc3fc" gracePeriod=30 Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.925090 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.937631 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.957381 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 17:10:14 crc kubenswrapper[4886]: E0129 17:10:14.957924 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b515f59a-4b3a-4821-bbec-8e622a8164e6" containerName="nova-api-log" Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.957944 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="b515f59a-4b3a-4821-bbec-8e622a8164e6" containerName="nova-api-log" Jan 29 17:10:14 crc kubenswrapper[4886]: E0129 17:10:14.957957 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b515f59a-4b3a-4821-bbec-8e622a8164e6" containerName="nova-api-api" Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.957965 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="b515f59a-4b3a-4821-bbec-8e622a8164e6" containerName="nova-api-api" Jan 29 17:10:14 crc kubenswrapper[4886]: E0129 17:10:14.958001 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a1c51cd-f91d-406b-815c-00879a9d6401" containerName="nova-manage" Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.958007 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a1c51cd-f91d-406b-815c-00879a9d6401" containerName="nova-manage" Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.958219 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="b515f59a-4b3a-4821-bbec-8e622a8164e6" containerName="nova-api-log" Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.958242 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="b515f59a-4b3a-4821-bbec-8e622a8164e6" containerName="nova-api-api" Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 
17:10:14.958260 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a1c51cd-f91d-406b-815c-00879a9d6401" containerName="nova-manage" Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.959613 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.961571 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.961676 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.961714 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 29 17:10:14 crc kubenswrapper[4886]: I0129 17:10:14.989486 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 17:10:15 crc kubenswrapper[4886]: I0129 17:10:15.010539 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbffe358-e916-4693-b76d-09fd332a7082-logs\") pod \"nova-api-0\" (UID: \"cbffe358-e916-4693-b76d-09fd332a7082\") " pod="openstack/nova-api-0" Jan 29 17:10:15 crc kubenswrapper[4886]: I0129 17:10:15.010600 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbffe358-e916-4693-b76d-09fd332a7082-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cbffe358-e916-4693-b76d-09fd332a7082\") " pod="openstack/nova-api-0" Jan 29 17:10:15 crc kubenswrapper[4886]: I0129 17:10:15.010649 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpk7z\" (UniqueName: \"kubernetes.io/projected/cbffe358-e916-4693-b76d-09fd332a7082-kube-api-access-fpk7z\") pod \"nova-api-0\" (UID: \"cbffe358-e916-4693-b76d-09fd332a7082\") " pod="openstack/nova-api-0" Jan 29 17:10:15 crc kubenswrapper[4886]: I0129 17:10:15.010690 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbffe358-e916-4693-b76d-09fd332a7082-config-data\") pod \"nova-api-0\" (UID: \"cbffe358-e916-4693-b76d-09fd332a7082\") " pod="openstack/nova-api-0" Jan 29 17:10:15 crc kubenswrapper[4886]: I0129 17:10:15.010809 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbffe358-e916-4693-b76d-09fd332a7082-public-tls-certs\") pod \"nova-api-0\" (UID: \"cbffe358-e916-4693-b76d-09fd332a7082\") " pod="openstack/nova-api-0" Jan 29 17:10:15 crc kubenswrapper[4886]: I0129 17:10:15.010831 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbffe358-e916-4693-b76d-09fd332a7082-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cbffe358-e916-4693-b76d-09fd332a7082\") " pod="openstack/nova-api-0" Jan 29 17:10:15 crc kubenswrapper[4886]: I0129 17:10:15.107891 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-b7z4z" podUID="265d5adc-ace5-4008-99d5-206b5182e6d4" containerName="registry-server" probeResult="failure" output=< Jan 29 17:10:15 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" 
within 1s Jan 29 17:10:15 crc kubenswrapper[4886]: > Jan 29 17:10:15 crc kubenswrapper[4886]: I0129 17:10:15.112863 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbffe358-e916-4693-b76d-09fd332a7082-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cbffe358-e916-4693-b76d-09fd332a7082\") " pod="openstack/nova-api-0" Jan 29 17:10:15 crc kubenswrapper[4886]: I0129 17:10:15.112938 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpk7z\" (UniqueName: \"kubernetes.io/projected/cbffe358-e916-4693-b76d-09fd332a7082-kube-api-access-fpk7z\") pod \"nova-api-0\" (UID: \"cbffe358-e916-4693-b76d-09fd332a7082\") " pod="openstack/nova-api-0" Jan 29 17:10:15 crc kubenswrapper[4886]: I0129 17:10:15.112977 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbffe358-e916-4693-b76d-09fd332a7082-config-data\") pod \"nova-api-0\" (UID: \"cbffe358-e916-4693-b76d-09fd332a7082\") " pod="openstack/nova-api-0" Jan 29 17:10:15 crc kubenswrapper[4886]: I0129 17:10:15.113060 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbffe358-e916-4693-b76d-09fd332a7082-public-tls-certs\") pod \"nova-api-0\" (UID: \"cbffe358-e916-4693-b76d-09fd332a7082\") " pod="openstack/nova-api-0" Jan 29 17:10:15 crc kubenswrapper[4886]: I0129 17:10:15.113080 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbffe358-e916-4693-b76d-09fd332a7082-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cbffe358-e916-4693-b76d-09fd332a7082\") " pod="openstack/nova-api-0" Jan 29 17:10:15 crc kubenswrapper[4886]: I0129 17:10:15.113169 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbffe358-e916-4693-b76d-09fd332a7082-logs\") pod \"nova-api-0\" (UID: \"cbffe358-e916-4693-b76d-09fd332a7082\") " pod="openstack/nova-api-0" Jan 29 17:10:15 crc kubenswrapper[4886]: I0129 17:10:15.113569 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbffe358-e916-4693-b76d-09fd332a7082-logs\") pod \"nova-api-0\" (UID: \"cbffe358-e916-4693-b76d-09fd332a7082\") " pod="openstack/nova-api-0" Jan 29 17:10:15 crc kubenswrapper[4886]: I0129 17:10:15.119041 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbffe358-e916-4693-b76d-09fd332a7082-config-data\") pod \"nova-api-0\" (UID: \"cbffe358-e916-4693-b76d-09fd332a7082\") " pod="openstack/nova-api-0" Jan 29 17:10:15 crc kubenswrapper[4886]: I0129 17:10:15.119511 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbffe358-e916-4693-b76d-09fd332a7082-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cbffe358-e916-4693-b76d-09fd332a7082\") " pod="openstack/nova-api-0" Jan 29 17:10:15 crc kubenswrapper[4886]: I0129 17:10:15.120854 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbffe358-e916-4693-b76d-09fd332a7082-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cbffe358-e916-4693-b76d-09fd332a7082\") " pod="openstack/nova-api-0" Jan 29 17:10:15 crc 
kubenswrapper[4886]: I0129 17:10:15.127926 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbffe358-e916-4693-b76d-09fd332a7082-public-tls-certs\") pod \"nova-api-0\" (UID: \"cbffe358-e916-4693-b76d-09fd332a7082\") " pod="openstack/nova-api-0" Jan 29 17:10:15 crc kubenswrapper[4886]: I0129 17:10:15.136176 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpk7z\" (UniqueName: \"kubernetes.io/projected/cbffe358-e916-4693-b76d-09fd332a7082-kube-api-access-fpk7z\") pod \"nova-api-0\" (UID: \"cbffe358-e916-4693-b76d-09fd332a7082\") " pod="openstack/nova-api-0" Jan 29 17:10:15 crc kubenswrapper[4886]: I0129 17:10:15.279031 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 17:10:15 crc kubenswrapper[4886]: I0129 17:10:15.803344 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 17:10:15 crc kubenswrapper[4886]: I0129 17:10:15.888557 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cbffe358-e916-4693-b76d-09fd332a7082","Type":"ContainerStarted","Data":"e88fb79a196e941fabd58fb768bad1edc1da992688c9b33a1a1e6122f1242cb4"} Jan 29 17:10:15 crc kubenswrapper[4886]: I0129 17:10:15.890066 4886 generic.go:334] "Generic (PLEG): container finished" podID="6ba13f7f-cb9d-4147-9f9d-982bd5daac77" containerID="5b523a0231e956d5db224e5c8db2f3e8aaf553d5abc7de07ad05e39c231cc3fc" exitCode=143 Jan 29 17:10:15 crc kubenswrapper[4886]: I0129 17:10:15.890111 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6ba13f7f-cb9d-4147-9f9d-982bd5daac77","Type":"ContainerDied","Data":"5b523a0231e956d5db224e5c8db2f3e8aaf553d5abc7de07ad05e39c231cc3fc"} Jan 29 17:10:15 crc kubenswrapper[4886]: I0129 17:10:15.891295 4886 generic.go:334] "Generic (PLEG): container finished" podID="dd8b58c7-942f-4f89-88a0-ce374fd98f0b" containerID="9734db9b6c351c8b935d8796b19514bcaecf82f2265e11ccf340fb3e8e4c7834" exitCode=0 Jan 29 17:10:15 crc kubenswrapper[4886]: I0129 17:10:15.891318 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dd8b58c7-942f-4f89-88a0-ce374fd98f0b","Type":"ContainerDied","Data":"9734db9b6c351c8b935d8796b19514bcaecf82f2265e11ccf340fb3e8e4c7834"} Jan 29 17:10:15 crc kubenswrapper[4886]: I0129 17:10:15.990245 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 17:10:16 crc kubenswrapper[4886]: I0129 17:10:16.143622 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd8b58c7-942f-4f89-88a0-ce374fd98f0b-combined-ca-bundle\") pod \"dd8b58c7-942f-4f89-88a0-ce374fd98f0b\" (UID: \"dd8b58c7-942f-4f89-88a0-ce374fd98f0b\") " Jan 29 17:10:16 crc kubenswrapper[4886]: I0129 17:10:16.143736 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd8b58c7-942f-4f89-88a0-ce374fd98f0b-config-data\") pod \"dd8b58c7-942f-4f89-88a0-ce374fd98f0b\" (UID: \"dd8b58c7-942f-4f89-88a0-ce374fd98f0b\") " Jan 29 17:10:16 crc kubenswrapper[4886]: I0129 17:10:16.143923 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wgk9\" (UniqueName: \"kubernetes.io/projected/dd8b58c7-942f-4f89-88a0-ce374fd98f0b-kube-api-access-4wgk9\") pod \"dd8b58c7-942f-4f89-88a0-ce374fd98f0b\" (UID: \"dd8b58c7-942f-4f89-88a0-ce374fd98f0b\") " Jan 29 17:10:16 crc kubenswrapper[4886]: I0129 17:10:16.148582 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd8b58c7-942f-4f89-88a0-ce374fd98f0b-kube-api-access-4wgk9" (OuterVolumeSpecName: "kube-api-access-4wgk9") pod "dd8b58c7-942f-4f89-88a0-ce374fd98f0b" (UID: "dd8b58c7-942f-4f89-88a0-ce374fd98f0b"). InnerVolumeSpecName "kube-api-access-4wgk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:10:16 crc kubenswrapper[4886]: I0129 17:10:16.183421 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd8b58c7-942f-4f89-88a0-ce374fd98f0b-config-data" (OuterVolumeSpecName: "config-data") pod "dd8b58c7-942f-4f89-88a0-ce374fd98f0b" (UID: "dd8b58c7-942f-4f89-88a0-ce374fd98f0b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:10:16 crc kubenswrapper[4886]: I0129 17:10:16.199079 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd8b58c7-942f-4f89-88a0-ce374fd98f0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd8b58c7-942f-4f89-88a0-ce374fd98f0b" (UID: "dd8b58c7-942f-4f89-88a0-ce374fd98f0b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:10:16 crc kubenswrapper[4886]: I0129 17:10:16.246989 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wgk9\" (UniqueName: \"kubernetes.io/projected/dd8b58c7-942f-4f89-88a0-ce374fd98f0b-kube-api-access-4wgk9\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:16 crc kubenswrapper[4886]: I0129 17:10:16.247021 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd8b58c7-942f-4f89-88a0-ce374fd98f0b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:16 crc kubenswrapper[4886]: I0129 17:10:16.247031 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd8b58c7-942f-4f89-88a0-ce374fd98f0b-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:16 crc kubenswrapper[4886]: I0129 17:10:16.643618 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b515f59a-4b3a-4821-bbec-8e622a8164e6" path="/var/lib/kubelet/pods/b515f59a-4b3a-4821-bbec-8e622a8164e6/volumes" Jan 29 17:10:16 crc kubenswrapper[4886]: I0129 17:10:16.912123 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dd8b58c7-942f-4f89-88a0-ce374fd98f0b","Type":"ContainerDied","Data":"c2ea7d41eadeb9e0900ac95c53b4acc74be8017115cf4e43325000be7c90063b"} Jan 29 17:10:16 crc kubenswrapper[4886]: I0129 17:10:16.912175 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 17:10:16 crc kubenswrapper[4886]: I0129 17:10:16.912196 4886 scope.go:117] "RemoveContainer" containerID="9734db9b6c351c8b935d8796b19514bcaecf82f2265e11ccf340fb3e8e4c7834" Jan 29 17:10:16 crc kubenswrapper[4886]: I0129 17:10:16.918770 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cbffe358-e916-4693-b76d-09fd332a7082","Type":"ContainerStarted","Data":"f74f4068a780ec3e97c028d30192a5c360c29d9e96ab00f973a4915e0a4ec0b6"} Jan 29 17:10:16 crc kubenswrapper[4886]: I0129 17:10:16.918805 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cbffe358-e916-4693-b76d-09fd332a7082","Type":"ContainerStarted","Data":"c93af99841322471d4d39b5b6ef50088a4e01be653dbd6536ee4b3e2038de5e2"} Jan 29 17:10:16 crc kubenswrapper[4886]: I0129 17:10:16.926643 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51203b48-4909-45b6-8c3a-296fc4ee639c","Type":"ContainerStarted","Data":"01c6694fd4df1d797b97e25cbe9f80e6eca4f580fbbf77224f8cc99225251a03"} Jan 29 17:10:16 crc kubenswrapper[4886]: I0129 17:10:16.927660 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 17:10:16 crc kubenswrapper[4886]: I0129 17:10:16.962131 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.962106245 podStartE2EDuration="2.962106245s" podCreationTimestamp="2026-01-29 17:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:10:16.943551324 +0000 UTC m=+2899.852270596" watchObservedRunningTime="2026-01-29 17:10:16.962106245 +0000 UTC m=+2899.870825517" Jan 29 17:10:16 crc kubenswrapper[4886]: I0129 17:10:16.975625 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=2.398555709 podStartE2EDuration="6.975605401s" podCreationTimestamp="2026-01-29 17:10:10 +0000 UTC" firstStartedPulling="2026-01-29 17:10:11.797825743 +0000 UTC m=+2894.706545015" lastFinishedPulling="2026-01-29 17:10:16.374875435 +0000 UTC m=+2899.283594707" observedRunningTime="2026-01-29 17:10:16.958720568 +0000 UTC m=+2899.867439830" watchObservedRunningTime="2026-01-29 17:10:16.975605401 +0000 UTC m=+2899.884324683" Jan 29 17:10:17 crc kubenswrapper[4886]: I0129 17:10:17.008125 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 17:10:17 crc kubenswrapper[4886]: I0129 17:10:17.021969 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 17:10:17 crc kubenswrapper[4886]: I0129 17:10:17.032273 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 17:10:17 crc kubenswrapper[4886]: E0129 17:10:17.032842 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd8b58c7-942f-4f89-88a0-ce374fd98f0b" containerName="nova-scheduler-scheduler" Jan 29 17:10:17 crc kubenswrapper[4886]: I0129 17:10:17.032864 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd8b58c7-942f-4f89-88a0-ce374fd98f0b" containerName="nova-scheduler-scheduler" Jan 29 17:10:17 crc kubenswrapper[4886]: I0129 17:10:17.033060 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd8b58c7-942f-4f89-88a0-ce374fd98f0b" containerName="nova-scheduler-scheduler" Jan 29 17:10:17 crc kubenswrapper[4886]: I0129 17:10:17.033885 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 17:10:17 crc kubenswrapper[4886]: I0129 17:10:17.036749 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 17:10:17 crc kubenswrapper[4886]: I0129 17:10:17.042000 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 17:10:17 crc kubenswrapper[4886]: I0129 17:10:17.172460 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc4c563c-21d3-41cf-aabf-dd4429d59b62-config-data\") pod \"nova-scheduler-0\" (UID: \"fc4c563c-21d3-41cf-aabf-dd4429d59b62\") " pod="openstack/nova-scheduler-0" Jan 29 17:10:17 crc kubenswrapper[4886]: I0129 17:10:17.172554 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc4c563c-21d3-41cf-aabf-dd4429d59b62-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fc4c563c-21d3-41cf-aabf-dd4429d59b62\") " pod="openstack/nova-scheduler-0" Jan 29 17:10:17 crc kubenswrapper[4886]: I0129 17:10:17.172755 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbfhj\" (UniqueName: \"kubernetes.io/projected/fc4c563c-21d3-41cf-aabf-dd4429d59b62-kube-api-access-bbfhj\") pod \"nova-scheduler-0\" (UID: \"fc4c563c-21d3-41cf-aabf-dd4429d59b62\") " pod="openstack/nova-scheduler-0" Jan 29 17:10:17 crc kubenswrapper[4886]: I0129 17:10:17.274555 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc4c563c-21d3-41cf-aabf-dd4429d59b62-config-data\") pod \"nova-scheduler-0\" (UID: \"fc4c563c-21d3-41cf-aabf-dd4429d59b62\") " pod="openstack/nova-scheduler-0" Jan 29 17:10:17 
crc kubenswrapper[4886]: I0129 17:10:17.274626 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc4c563c-21d3-41cf-aabf-dd4429d59b62-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fc4c563c-21d3-41cf-aabf-dd4429d59b62\") " pod="openstack/nova-scheduler-0" Jan 29 17:10:17 crc kubenswrapper[4886]: I0129 17:10:17.274766 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbfhj\" (UniqueName: \"kubernetes.io/projected/fc4c563c-21d3-41cf-aabf-dd4429d59b62-kube-api-access-bbfhj\") pod \"nova-scheduler-0\" (UID: \"fc4c563c-21d3-41cf-aabf-dd4429d59b62\") " pod="openstack/nova-scheduler-0" Jan 29 17:10:17 crc kubenswrapper[4886]: I0129 17:10:17.280443 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc4c563c-21d3-41cf-aabf-dd4429d59b62-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fc4c563c-21d3-41cf-aabf-dd4429d59b62\") " pod="openstack/nova-scheduler-0" Jan 29 17:10:17 crc kubenswrapper[4886]: I0129 17:10:17.283021 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc4c563c-21d3-41cf-aabf-dd4429d59b62-config-data\") pod \"nova-scheduler-0\" (UID: \"fc4c563c-21d3-41cf-aabf-dd4429d59b62\") " pod="openstack/nova-scheduler-0" Jan 29 17:10:17 crc kubenswrapper[4886]: I0129 17:10:17.299524 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbfhj\" (UniqueName: \"kubernetes.io/projected/fc4c563c-21d3-41cf-aabf-dd4429d59b62-kube-api-access-bbfhj\") pod \"nova-scheduler-0\" (UID: \"fc4c563c-21d3-41cf-aabf-dd4429d59b62\") " pod="openstack/nova-scheduler-0" Jan 29 17:10:17 crc kubenswrapper[4886]: I0129 17:10:17.359946 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 17:10:17 crc kubenswrapper[4886]: I0129 17:10:17.937735 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 17:10:17 crc kubenswrapper[4886]: I0129 17:10:17.951598 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fc4c563c-21d3-41cf-aabf-dd4429d59b62","Type":"ContainerStarted","Data":"b39c129d992b6913ea3b322e36d56792fb7f27e379c2c13f26ce269ac248fa3f"} Jan 29 17:10:18 crc kubenswrapper[4886]: I0129 17:10:18.016488 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="6ba13f7f-cb9d-4147-9f9d-982bd5daac77" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.10:8775/\": read tcp 10.217.0.2:53798->10.217.1.10:8775: read: connection reset by peer" Jan 29 17:10:18 crc kubenswrapper[4886]: I0129 17:10:18.016619 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="6ba13f7f-cb9d-4147-9f9d-982bd5daac77" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.10:8775/\": read tcp 10.217.0.2:53814->10.217.1.10:8775: read: connection reset by peer" Jan 29 17:10:18 crc kubenswrapper[4886]: I0129 17:10:18.533012 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 17:10:18 crc kubenswrapper[4886]: I0129 17:10:18.605715 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba13f7f-cb9d-4147-9f9d-982bd5daac77-combined-ca-bundle\") pod \"6ba13f7f-cb9d-4147-9f9d-982bd5daac77\" (UID: \"6ba13f7f-cb9d-4147-9f9d-982bd5daac77\") " Jan 29 17:10:18 crc kubenswrapper[4886]: I0129 17:10:18.605897 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ba13f7f-cb9d-4147-9f9d-982bd5daac77-config-data\") pod \"6ba13f7f-cb9d-4147-9f9d-982bd5daac77\" (UID: \"6ba13f7f-cb9d-4147-9f9d-982bd5daac77\") " Jan 29 17:10:18 crc kubenswrapper[4886]: I0129 17:10:18.605984 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba13f7f-cb9d-4147-9f9d-982bd5daac77-nova-metadata-tls-certs\") pod \"6ba13f7f-cb9d-4147-9f9d-982bd5daac77\" (UID: \"6ba13f7f-cb9d-4147-9f9d-982bd5daac77\") " Jan 29 17:10:18 crc kubenswrapper[4886]: I0129 17:10:18.606083 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ba13f7f-cb9d-4147-9f9d-982bd5daac77-logs\") pod \"6ba13f7f-cb9d-4147-9f9d-982bd5daac77\" (UID: \"6ba13f7f-cb9d-4147-9f9d-982bd5daac77\") " Jan 29 17:10:18 crc kubenswrapper[4886]: I0129 17:10:18.606135 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dr8p2\" (UniqueName: \"kubernetes.io/projected/6ba13f7f-cb9d-4147-9f9d-982bd5daac77-kube-api-access-dr8p2\") pod \"6ba13f7f-cb9d-4147-9f9d-982bd5daac77\" (UID: \"6ba13f7f-cb9d-4147-9f9d-982bd5daac77\") " Jan 29 17:10:18 crc kubenswrapper[4886]: I0129 17:10:18.606853 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ba13f7f-cb9d-4147-9f9d-982bd5daac77-logs" (OuterVolumeSpecName: "logs") pod "6ba13f7f-cb9d-4147-9f9d-982bd5daac77" (UID: "6ba13f7f-cb9d-4147-9f9d-982bd5daac77"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:10:18 crc kubenswrapper[4886]: I0129 17:10:18.616414 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ba13f7f-cb9d-4147-9f9d-982bd5daac77-kube-api-access-dr8p2" (OuterVolumeSpecName: "kube-api-access-dr8p2") pod "6ba13f7f-cb9d-4147-9f9d-982bd5daac77" (UID: "6ba13f7f-cb9d-4147-9f9d-982bd5daac77"). InnerVolumeSpecName "kube-api-access-dr8p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:10:18 crc kubenswrapper[4886]: I0129 17:10:18.648250 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd8b58c7-942f-4f89-88a0-ce374fd98f0b" path="/var/lib/kubelet/pods/dd8b58c7-942f-4f89-88a0-ce374fd98f0b/volumes" Jan 29 17:10:18 crc kubenswrapper[4886]: I0129 17:10:18.671798 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ba13f7f-cb9d-4147-9f9d-982bd5daac77-config-data" (OuterVolumeSpecName: "config-data") pod "6ba13f7f-cb9d-4147-9f9d-982bd5daac77" (UID: "6ba13f7f-cb9d-4147-9f9d-982bd5daac77"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:10:18 crc kubenswrapper[4886]: I0129 17:10:18.685230 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ba13f7f-cb9d-4147-9f9d-982bd5daac77-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ba13f7f-cb9d-4147-9f9d-982bd5daac77" (UID: "6ba13f7f-cb9d-4147-9f9d-982bd5daac77"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:10:18 crc kubenswrapper[4886]: I0129 17:10:18.706799 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ba13f7f-cb9d-4147-9f9d-982bd5daac77-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "6ba13f7f-cb9d-4147-9f9d-982bd5daac77" (UID: "6ba13f7f-cb9d-4147-9f9d-982bd5daac77"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:10:18 crc kubenswrapper[4886]: I0129 17:10:18.709746 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ba13f7f-cb9d-4147-9f9d-982bd5daac77-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:18 crc kubenswrapper[4886]: I0129 17:10:18.709794 4886 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba13f7f-cb9d-4147-9f9d-982bd5daac77-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:18 crc kubenswrapper[4886]: I0129 17:10:18.709808 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ba13f7f-cb9d-4147-9f9d-982bd5daac77-logs\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:18 crc kubenswrapper[4886]: I0129 17:10:18.709819 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dr8p2\" (UniqueName: \"kubernetes.io/projected/6ba13f7f-cb9d-4147-9f9d-982bd5daac77-kube-api-access-dr8p2\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:18 crc kubenswrapper[4886]: I0129 17:10:18.709830 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba13f7f-cb9d-4147-9f9d-982bd5daac77-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:18 crc kubenswrapper[4886]: I0129 17:10:18.987440 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fc4c563c-21d3-41cf-aabf-dd4429d59b62","Type":"ContainerStarted","Data":"5a7fd94ae03c209702afc1ec138d28e079580d82a66e66b5e311b5a921afa695"} Jan 29 17:10:18 crc kubenswrapper[4886]: I0129 17:10:18.990456 4886 generic.go:334] "Generic (PLEG): container finished" podID="6ba13f7f-cb9d-4147-9f9d-982bd5daac77" containerID="cd779590c513b85f1be24ee1be77a1addf20dbbca3b8eb0c655a6287c5d23cb9" exitCode=0 Jan 29 17:10:18 crc kubenswrapper[4886]: I0129 17:10:18.990486 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6ba13f7f-cb9d-4147-9f9d-982bd5daac77","Type":"ContainerDied","Data":"cd779590c513b85f1be24ee1be77a1addf20dbbca3b8eb0c655a6287c5d23cb9"} Jan 29 17:10:18 crc kubenswrapper[4886]: I0129 17:10:18.990509 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6ba13f7f-cb9d-4147-9f9d-982bd5daac77","Type":"ContainerDied","Data":"590686b9473f5c18e61b69cef7feee9a7b36c136560c55bdbbed141a70bc112d"} Jan 29 17:10:18 crc kubenswrapper[4886]: I0129 17:10:18.990525 4886 scope.go:117] 
"RemoveContainer" containerID="cd779590c513b85f1be24ee1be77a1addf20dbbca3b8eb0c655a6287c5d23cb9" Jan 29 17:10:18 crc kubenswrapper[4886]: I0129 17:10:18.990606 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.005408 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.005393719 podStartE2EDuration="3.005393719s" podCreationTimestamp="2026-01-29 17:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:10:19.004257097 +0000 UTC m=+2901.912976379" watchObservedRunningTime="2026-01-29 17:10:19.005393719 +0000 UTC m=+2901.914112991" Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.042091 4886 scope.go:117] "RemoveContainer" containerID="5b523a0231e956d5db224e5c8db2f3e8aaf553d5abc7de07ad05e39c231cc3fc" Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.054586 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.083859 4886 scope.go:117] "RemoveContainer" containerID="cd779590c513b85f1be24ee1be77a1addf20dbbca3b8eb0c655a6287c5d23cb9" Jan 29 17:10:19 crc kubenswrapper[4886]: E0129 17:10:19.084501 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd779590c513b85f1be24ee1be77a1addf20dbbca3b8eb0c655a6287c5d23cb9\": container with ID starting with cd779590c513b85f1be24ee1be77a1addf20dbbca3b8eb0c655a6287c5d23cb9 not found: ID does not exist" containerID="cd779590c513b85f1be24ee1be77a1addf20dbbca3b8eb0c655a6287c5d23cb9" Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.084567 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd779590c513b85f1be24ee1be77a1addf20dbbca3b8eb0c655a6287c5d23cb9"} err="failed to get container status \"cd779590c513b85f1be24ee1be77a1addf20dbbca3b8eb0c655a6287c5d23cb9\": rpc error: code = NotFound desc = could not find container \"cd779590c513b85f1be24ee1be77a1addf20dbbca3b8eb0c655a6287c5d23cb9\": container with ID starting with cd779590c513b85f1be24ee1be77a1addf20dbbca3b8eb0c655a6287c5d23cb9 not found: ID does not exist" Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.084593 4886 scope.go:117] "RemoveContainer" containerID="5b523a0231e956d5db224e5c8db2f3e8aaf553d5abc7de07ad05e39c231cc3fc" Jan 29 17:10:19 crc kubenswrapper[4886]: E0129 17:10:19.084932 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b523a0231e956d5db224e5c8db2f3e8aaf553d5abc7de07ad05e39c231cc3fc\": container with ID starting with 5b523a0231e956d5db224e5c8db2f3e8aaf553d5abc7de07ad05e39c231cc3fc not found: ID does not exist" containerID="5b523a0231e956d5db224e5c8db2f3e8aaf553d5abc7de07ad05e39c231cc3fc" Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.084951 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b523a0231e956d5db224e5c8db2f3e8aaf553d5abc7de07ad05e39c231cc3fc"} err="failed to get container status \"5b523a0231e956d5db224e5c8db2f3e8aaf553d5abc7de07ad05e39c231cc3fc\": rpc error: code = NotFound desc = could not find container \"5b523a0231e956d5db224e5c8db2f3e8aaf553d5abc7de07ad05e39c231cc3fc\": container with ID starting with 
5b523a0231e956d5db224e5c8db2f3e8aaf553d5abc7de07ad05e39c231cc3fc not found: ID does not exist" Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.105563 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.136394 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 17:10:19 crc kubenswrapper[4886]: E0129 17:10:19.136970 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ba13f7f-cb9d-4147-9f9d-982bd5daac77" containerName="nova-metadata-log" Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.136990 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ba13f7f-cb9d-4147-9f9d-982bd5daac77" containerName="nova-metadata-log" Jan 29 17:10:19 crc kubenswrapper[4886]: E0129 17:10:19.137013 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ba13f7f-cb9d-4147-9f9d-982bd5daac77" containerName="nova-metadata-metadata" Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.137019 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ba13f7f-cb9d-4147-9f9d-982bd5daac77" containerName="nova-metadata-metadata" Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.137259 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ba13f7f-cb9d-4147-9f9d-982bd5daac77" containerName="nova-metadata-log" Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.137274 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ba13f7f-cb9d-4147-9f9d-982bd5daac77" containerName="nova-metadata-metadata" Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.138787 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.141602 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.144773 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.154455 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.242217 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a568175-84cc-425a-9adf-5013a7fb5171-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9a568175-84cc-425a-9adf-5013a7fb5171\") " pod="openstack/nova-metadata-0" Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.242287 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqkw6\" (UniqueName: \"kubernetes.io/projected/9a568175-84cc-425a-9adf-5013a7fb5171-kube-api-access-fqkw6\") pod \"nova-metadata-0\" (UID: \"9a568175-84cc-425a-9adf-5013a7fb5171\") " pod="openstack/nova-metadata-0" Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.242740 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a568175-84cc-425a-9adf-5013a7fb5171-config-data\") pod \"nova-metadata-0\" (UID: \"9a568175-84cc-425a-9adf-5013a7fb5171\") " pod="openstack/nova-metadata-0" Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.243123 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a568175-84cc-425a-9adf-5013a7fb5171-logs\") pod \"nova-metadata-0\" (UID: \"9a568175-84cc-425a-9adf-5013a7fb5171\") " pod="openstack/nova-metadata-0" Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.243279 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a568175-84cc-425a-9adf-5013a7fb5171-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9a568175-84cc-425a-9adf-5013a7fb5171\") " pod="openstack/nova-metadata-0" Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.345528 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a568175-84cc-425a-9adf-5013a7fb5171-config-data\") pod \"nova-metadata-0\" (UID: \"9a568175-84cc-425a-9adf-5013a7fb5171\") " pod="openstack/nova-metadata-0" Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.345674 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a568175-84cc-425a-9adf-5013a7fb5171-logs\") pod \"nova-metadata-0\" (UID: \"9a568175-84cc-425a-9adf-5013a7fb5171\") " pod="openstack/nova-metadata-0" Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.345733 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a568175-84cc-425a-9adf-5013a7fb5171-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9a568175-84cc-425a-9adf-5013a7fb5171\") " pod="openstack/nova-metadata-0" Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.345764 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a568175-84cc-425a-9adf-5013a7fb5171-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9a568175-84cc-425a-9adf-5013a7fb5171\") " pod="openstack/nova-metadata-0" Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.345789 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqkw6\" (UniqueName: \"kubernetes.io/projected/9a568175-84cc-425a-9adf-5013a7fb5171-kube-api-access-fqkw6\") pod \"nova-metadata-0\" (UID: \"9a568175-84cc-425a-9adf-5013a7fb5171\") " pod="openstack/nova-metadata-0" Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.347055 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a568175-84cc-425a-9adf-5013a7fb5171-logs\") pod \"nova-metadata-0\" (UID: \"9a568175-84cc-425a-9adf-5013a7fb5171\") " pod="openstack/nova-metadata-0" Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.354836 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a568175-84cc-425a-9adf-5013a7fb5171-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9a568175-84cc-425a-9adf-5013a7fb5171\") " pod="openstack/nova-metadata-0" Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.358805 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a568175-84cc-425a-9adf-5013a7fb5171-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9a568175-84cc-425a-9adf-5013a7fb5171\") " pod="openstack/nova-metadata-0" Jan 
29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.359320 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a568175-84cc-425a-9adf-5013a7fb5171-config-data\") pod \"nova-metadata-0\" (UID: \"9a568175-84cc-425a-9adf-5013a7fb5171\") " pod="openstack/nova-metadata-0" Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.367152 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqkw6\" (UniqueName: \"kubernetes.io/projected/9a568175-84cc-425a-9adf-5013a7fb5171-kube-api-access-fqkw6\") pod \"nova-metadata-0\" (UID: \"9a568175-84cc-425a-9adf-5013a7fb5171\") " pod="openstack/nova-metadata-0" Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.472727 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 17:10:19 crc kubenswrapper[4886]: I0129 17:10:19.983856 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 17:10:19 crc kubenswrapper[4886]: W0129 17:10:19.987082 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a568175_84cc_425a_9adf_5013a7fb5171.slice/crio-c6aa4f4fb07bcd94dc23abe25c30aa14a5a66175f93a187f3b25d31820287687 WatchSource:0}: Error finding container c6aa4f4fb07bcd94dc23abe25c30aa14a5a66175f93a187f3b25d31820287687: Status 404 returned error can't find the container with id c6aa4f4fb07bcd94dc23abe25c30aa14a5a66175f93a187f3b25d31820287687 Jan 29 17:10:20 crc kubenswrapper[4886]: I0129 17:10:20.006612 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9a568175-84cc-425a-9adf-5013a7fb5171","Type":"ContainerStarted","Data":"c6aa4f4fb07bcd94dc23abe25c30aa14a5a66175f93a187f3b25d31820287687"} Jan 29 17:10:20 crc kubenswrapper[4886]: I0129 17:10:20.633498 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ba13f7f-cb9d-4147-9f9d-982bd5daac77" path="/var/lib/kubelet/pods/6ba13f7f-cb9d-4147-9f9d-982bd5daac77/volumes" Jan 29 17:10:21 crc kubenswrapper[4886]: I0129 17:10:21.024246 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9a568175-84cc-425a-9adf-5013a7fb5171","Type":"ContainerStarted","Data":"8acefa6d1c42b715e1b3c36b2826a0a57ac2cf2b1a2590a3dff7b817d637c904"} Jan 29 17:10:21 crc kubenswrapper[4886]: I0129 17:10:21.024326 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9a568175-84cc-425a-9adf-5013a7fb5171","Type":"ContainerStarted","Data":"b9fdf8d8e8bdb1ef81e9be52cdb85659ac35f6333566df9a59096924dc10bd8f"} Jan 29 17:10:21 crc kubenswrapper[4886]: I0129 17:10:21.050503 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.050483617 podStartE2EDuration="2.050483617s" podCreationTimestamp="2026-01-29 17:10:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:10:21.045177795 +0000 UTC m=+2903.953897127" watchObservedRunningTime="2026-01-29 17:10:21.050483617 +0000 UTC m=+2903.959202909" Jan 29 17:10:22 crc kubenswrapper[4886]: I0129 17:10:22.360094 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 17:10:24 crc kubenswrapper[4886]: I0129 17:10:24.113733 4886 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b7z4z" Jan 29 17:10:24 crc kubenswrapper[4886]: I0129 17:10:24.163534 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b7z4z" Jan 29 17:10:24 crc kubenswrapper[4886]: I0129 17:10:24.473784 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 17:10:24 crc kubenswrapper[4886]: I0129 17:10:24.473879 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 17:10:24 crc kubenswrapper[4886]: I0129 17:10:24.879109 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b7z4z"] Jan 29 17:10:25 crc kubenswrapper[4886]: I0129 17:10:25.279776 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 17:10:25 crc kubenswrapper[4886]: I0129 17:10:25.279840 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 17:10:26 crc kubenswrapper[4886]: I0129 17:10:26.088629 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b7z4z" podUID="265d5adc-ace5-4008-99d5-206b5182e6d4" containerName="registry-server" containerID="cri-o://4f918436d3a4458be4f1385c7fcfd7781d59051384022442109a970fd2117ede" gracePeriod=2 Jan 29 17:10:26 crc kubenswrapper[4886]: I0129 17:10:26.293704 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cbffe358-e916-4693-b76d-09fd332a7082" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.17:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 17:10:26 crc kubenswrapper[4886]: I0129 17:10:26.293706 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cbffe358-e916-4693-b76d-09fd332a7082" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.17:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 17:10:26 crc kubenswrapper[4886]: I0129 17:10:26.892890 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b7z4z" Jan 29 17:10:27 crc kubenswrapper[4886]: I0129 17:10:27.043320 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/265d5adc-ace5-4008-99d5-206b5182e6d4-catalog-content\") pod \"265d5adc-ace5-4008-99d5-206b5182e6d4\" (UID: \"265d5adc-ace5-4008-99d5-206b5182e6d4\") " Jan 29 17:10:27 crc kubenswrapper[4886]: I0129 17:10:27.044011 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkxvc\" (UniqueName: \"kubernetes.io/projected/265d5adc-ace5-4008-99d5-206b5182e6d4-kube-api-access-xkxvc\") pod \"265d5adc-ace5-4008-99d5-206b5182e6d4\" (UID: \"265d5adc-ace5-4008-99d5-206b5182e6d4\") " Jan 29 17:10:27 crc kubenswrapper[4886]: I0129 17:10:27.044233 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/265d5adc-ace5-4008-99d5-206b5182e6d4-utilities\") pod \"265d5adc-ace5-4008-99d5-206b5182e6d4\" (UID: \"265d5adc-ace5-4008-99d5-206b5182e6d4\") " Jan 29 17:10:27 crc kubenswrapper[4886]: I0129 17:10:27.044778 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/265d5adc-ace5-4008-99d5-206b5182e6d4-utilities" (OuterVolumeSpecName: "utilities") pod "265d5adc-ace5-4008-99d5-206b5182e6d4" (UID: "265d5adc-ace5-4008-99d5-206b5182e6d4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:10:27 crc kubenswrapper[4886]: I0129 17:10:27.045433 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/265d5adc-ace5-4008-99d5-206b5182e6d4-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:27 crc kubenswrapper[4886]: I0129 17:10:27.054113 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/265d5adc-ace5-4008-99d5-206b5182e6d4-kube-api-access-xkxvc" (OuterVolumeSpecName: "kube-api-access-xkxvc") pod "265d5adc-ace5-4008-99d5-206b5182e6d4" (UID: "265d5adc-ace5-4008-99d5-206b5182e6d4"). InnerVolumeSpecName "kube-api-access-xkxvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:10:27 crc kubenswrapper[4886]: I0129 17:10:27.107072 4886 generic.go:334] "Generic (PLEG): container finished" podID="265d5adc-ace5-4008-99d5-206b5182e6d4" containerID="4f918436d3a4458be4f1385c7fcfd7781d59051384022442109a970fd2117ede" exitCode=0 Jan 29 17:10:27 crc kubenswrapper[4886]: I0129 17:10:27.107113 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b7z4z" event={"ID":"265d5adc-ace5-4008-99d5-206b5182e6d4","Type":"ContainerDied","Data":"4f918436d3a4458be4f1385c7fcfd7781d59051384022442109a970fd2117ede"} Jan 29 17:10:27 crc kubenswrapper[4886]: I0129 17:10:27.107140 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b7z4z" event={"ID":"265d5adc-ace5-4008-99d5-206b5182e6d4","Type":"ContainerDied","Data":"b49a773367da81a381e19a2ba4ecf2f2565cbe6beacc718a457751390e647a71"} Jan 29 17:10:27 crc kubenswrapper[4886]: I0129 17:10:27.107157 4886 scope.go:117] "RemoveContainer" containerID="4f918436d3a4458be4f1385c7fcfd7781d59051384022442109a970fd2117ede" Jan 29 17:10:27 crc kubenswrapper[4886]: I0129 17:10:27.107297 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b7z4z" Jan 29 17:10:27 crc kubenswrapper[4886]: I0129 17:10:27.120746 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/265d5adc-ace5-4008-99d5-206b5182e6d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "265d5adc-ace5-4008-99d5-206b5182e6d4" (UID: "265d5adc-ace5-4008-99d5-206b5182e6d4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:10:27 crc kubenswrapper[4886]: I0129 17:10:27.127297 4886 scope.go:117] "RemoveContainer" containerID="3348e603d16bdd075d9fa10e25af3a479e537e3ba1e85926303e7efb2d68b173" Jan 29 17:10:27 crc kubenswrapper[4886]: I0129 17:10:27.147263 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/265d5adc-ace5-4008-99d5-206b5182e6d4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:27 crc kubenswrapper[4886]: I0129 17:10:27.147314 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkxvc\" (UniqueName: \"kubernetes.io/projected/265d5adc-ace5-4008-99d5-206b5182e6d4-kube-api-access-xkxvc\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:27 crc kubenswrapper[4886]: I0129 17:10:27.155840 4886 scope.go:117] "RemoveContainer" containerID="c1dd6ae46daebf75b61de05db1d9dcf57ca090cd74e3c93bdef7a80a5b1e0368" Jan 29 17:10:27 crc kubenswrapper[4886]: I0129 17:10:27.206110 4886 scope.go:117] "RemoveContainer" containerID="4f918436d3a4458be4f1385c7fcfd7781d59051384022442109a970fd2117ede" Jan 29 17:10:27 crc kubenswrapper[4886]: E0129 17:10:27.206552 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f918436d3a4458be4f1385c7fcfd7781d59051384022442109a970fd2117ede\": container with ID starting with 4f918436d3a4458be4f1385c7fcfd7781d59051384022442109a970fd2117ede not found: ID does not exist" containerID="4f918436d3a4458be4f1385c7fcfd7781d59051384022442109a970fd2117ede" Jan 29 17:10:27 crc kubenswrapper[4886]: I0129 17:10:27.206587 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f918436d3a4458be4f1385c7fcfd7781d59051384022442109a970fd2117ede"} err="failed to get container status \"4f918436d3a4458be4f1385c7fcfd7781d59051384022442109a970fd2117ede\": rpc error: code = NotFound desc = could not find container \"4f918436d3a4458be4f1385c7fcfd7781d59051384022442109a970fd2117ede\": container with ID starting with 4f918436d3a4458be4f1385c7fcfd7781d59051384022442109a970fd2117ede not found: ID does not exist" Jan 29 17:10:27 crc kubenswrapper[4886]: I0129 17:10:27.206609 4886 scope.go:117] "RemoveContainer" containerID="3348e603d16bdd075d9fa10e25af3a479e537e3ba1e85926303e7efb2d68b173" Jan 29 17:10:27 crc kubenswrapper[4886]: E0129 17:10:27.206823 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3348e603d16bdd075d9fa10e25af3a479e537e3ba1e85926303e7efb2d68b173\": container with ID starting with 3348e603d16bdd075d9fa10e25af3a479e537e3ba1e85926303e7efb2d68b173 not found: ID does not exist" containerID="3348e603d16bdd075d9fa10e25af3a479e537e3ba1e85926303e7efb2d68b173" Jan 29 17:10:27 crc kubenswrapper[4886]: I0129 17:10:27.206848 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3348e603d16bdd075d9fa10e25af3a479e537e3ba1e85926303e7efb2d68b173"} 
err="failed to get container status \"3348e603d16bdd075d9fa10e25af3a479e537e3ba1e85926303e7efb2d68b173\": rpc error: code = NotFound desc = could not find container \"3348e603d16bdd075d9fa10e25af3a479e537e3ba1e85926303e7efb2d68b173\": container with ID starting with 3348e603d16bdd075d9fa10e25af3a479e537e3ba1e85926303e7efb2d68b173 not found: ID does not exist" Jan 29 17:10:27 crc kubenswrapper[4886]: I0129 17:10:27.206863 4886 scope.go:117] "RemoveContainer" containerID="c1dd6ae46daebf75b61de05db1d9dcf57ca090cd74e3c93bdef7a80a5b1e0368" Jan 29 17:10:27 crc kubenswrapper[4886]: E0129 17:10:27.207219 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1dd6ae46daebf75b61de05db1d9dcf57ca090cd74e3c93bdef7a80a5b1e0368\": container with ID starting with c1dd6ae46daebf75b61de05db1d9dcf57ca090cd74e3c93bdef7a80a5b1e0368 not found: ID does not exist" containerID="c1dd6ae46daebf75b61de05db1d9dcf57ca090cd74e3c93bdef7a80a5b1e0368" Jan 29 17:10:27 crc kubenswrapper[4886]: I0129 17:10:27.207244 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1dd6ae46daebf75b61de05db1d9dcf57ca090cd74e3c93bdef7a80a5b1e0368"} err="failed to get container status \"c1dd6ae46daebf75b61de05db1d9dcf57ca090cd74e3c93bdef7a80a5b1e0368\": rpc error: code = NotFound desc = could not find container \"c1dd6ae46daebf75b61de05db1d9dcf57ca090cd74e3c93bdef7a80a5b1e0368\": container with ID starting with c1dd6ae46daebf75b61de05db1d9dcf57ca090cd74e3c93bdef7a80a5b1e0368 not found: ID does not exist" Jan 29 17:10:27 crc kubenswrapper[4886]: I0129 17:10:27.360709 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 17:10:27 crc kubenswrapper[4886]: I0129 17:10:27.424290 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 17:10:27 crc kubenswrapper[4886]: I0129 17:10:27.484114 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b7z4z"] Jan 29 17:10:27 crc kubenswrapper[4886]: I0129 17:10:27.495787 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b7z4z"] Jan 29 17:10:28 crc kubenswrapper[4886]: I0129 17:10:28.156488 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 17:10:28 crc kubenswrapper[4886]: I0129 17:10:28.647168 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="265d5adc-ace5-4008-99d5-206b5182e6d4" path="/var/lib/kubelet/pods/265d5adc-ace5-4008-99d5-206b5182e6d4/volumes" Jan 29 17:10:29 crc kubenswrapper[4886]: I0129 17:10:29.472993 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 17:10:29 crc kubenswrapper[4886]: I0129 17:10:29.473265 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 17:10:30 crc kubenswrapper[4886]: I0129 17:10:30.489503 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9a568175-84cc-425a-9adf-5013a7fb5171" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.19:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 17:10:30 crc kubenswrapper[4886]: I0129 17:10:30.489526 4886 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-metadata-0" podUID="9a568175-84cc-425a-9adf-5013a7fb5171" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.19:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 17:10:35 crc kubenswrapper[4886]: I0129 17:10:35.286811 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 17:10:35 crc kubenswrapper[4886]: I0129 17:10:35.287636 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 17:10:35 crc kubenswrapper[4886]: I0129 17:10:35.288549 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 17:10:35 crc kubenswrapper[4886]: I0129 17:10:35.297615 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 17:10:36 crc kubenswrapper[4886]: I0129 17:10:36.043088 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-84jbh"] Jan 29 17:10:36 crc kubenswrapper[4886]: E0129 17:10:36.043932 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="265d5adc-ace5-4008-99d5-206b5182e6d4" containerName="registry-server" Jan 29 17:10:36 crc kubenswrapper[4886]: I0129 17:10:36.043949 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="265d5adc-ace5-4008-99d5-206b5182e6d4" containerName="registry-server" Jan 29 17:10:36 crc kubenswrapper[4886]: E0129 17:10:36.043970 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="265d5adc-ace5-4008-99d5-206b5182e6d4" containerName="extract-utilities" Jan 29 17:10:36 crc kubenswrapper[4886]: I0129 17:10:36.043980 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="265d5adc-ace5-4008-99d5-206b5182e6d4" containerName="extract-utilities" Jan 29 17:10:36 crc kubenswrapper[4886]: E0129 17:10:36.043992 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="265d5adc-ace5-4008-99d5-206b5182e6d4" containerName="extract-content" Jan 29 17:10:36 crc kubenswrapper[4886]: I0129 17:10:36.044000 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="265d5adc-ace5-4008-99d5-206b5182e6d4" containerName="extract-content" Jan 29 17:10:36 crc kubenswrapper[4886]: I0129 17:10:36.044269 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="265d5adc-ace5-4008-99d5-206b5182e6d4" containerName="registry-server" Jan 29 17:10:36 crc kubenswrapper[4886]: I0129 17:10:36.048109 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-84jbh" Jan 29 17:10:36 crc kubenswrapper[4886]: I0129 17:10:36.058313 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-84jbh"] Jan 29 17:10:36 crc kubenswrapper[4886]: I0129 17:10:36.178227 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/217e65b9-b1b5-4244-930b-b85bc2e0a948-catalog-content\") pod \"redhat-marketplace-84jbh\" (UID: \"217e65b9-b1b5-4244-930b-b85bc2e0a948\") " pod="openshift-marketplace/redhat-marketplace-84jbh" Jan 29 17:10:36 crc kubenswrapper[4886]: I0129 17:10:36.178497 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/217e65b9-b1b5-4244-930b-b85bc2e0a948-utilities\") pod \"redhat-marketplace-84jbh\" (UID: \"217e65b9-b1b5-4244-930b-b85bc2e0a948\") " pod="openshift-marketplace/redhat-marketplace-84jbh" Jan 29 17:10:36 crc kubenswrapper[4886]: I0129 17:10:36.178638 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96q7f\" (UniqueName: \"kubernetes.io/projected/217e65b9-b1b5-4244-930b-b85bc2e0a948-kube-api-access-96q7f\") pod \"redhat-marketplace-84jbh\" (UID: \"217e65b9-b1b5-4244-930b-b85bc2e0a948\") " pod="openshift-marketplace/redhat-marketplace-84jbh" Jan 29 17:10:36 crc kubenswrapper[4886]: I0129 17:10:36.209271 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 17:10:36 crc kubenswrapper[4886]: I0129 17:10:36.215839 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 17:10:36 crc kubenswrapper[4886]: I0129 17:10:36.280611 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/217e65b9-b1b5-4244-930b-b85bc2e0a948-utilities\") pod \"redhat-marketplace-84jbh\" (UID: \"217e65b9-b1b5-4244-930b-b85bc2e0a948\") " pod="openshift-marketplace/redhat-marketplace-84jbh" Jan 29 17:10:36 crc kubenswrapper[4886]: I0129 17:10:36.280684 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96q7f\" (UniqueName: \"kubernetes.io/projected/217e65b9-b1b5-4244-930b-b85bc2e0a948-kube-api-access-96q7f\") pod \"redhat-marketplace-84jbh\" (UID: \"217e65b9-b1b5-4244-930b-b85bc2e0a948\") " pod="openshift-marketplace/redhat-marketplace-84jbh" Jan 29 17:10:36 crc kubenswrapper[4886]: I0129 17:10:36.280897 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/217e65b9-b1b5-4244-930b-b85bc2e0a948-catalog-content\") pod \"redhat-marketplace-84jbh\" (UID: \"217e65b9-b1b5-4244-930b-b85bc2e0a948\") " pod="openshift-marketplace/redhat-marketplace-84jbh" Jan 29 17:10:36 crc kubenswrapper[4886]: I0129 17:10:36.281200 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/217e65b9-b1b5-4244-930b-b85bc2e0a948-utilities\") pod \"redhat-marketplace-84jbh\" (UID: \"217e65b9-b1b5-4244-930b-b85bc2e0a948\") " pod="openshift-marketplace/redhat-marketplace-84jbh" Jan 29 17:10:36 crc kubenswrapper[4886]: I0129 17:10:36.281347 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/217e65b9-b1b5-4244-930b-b85bc2e0a948-catalog-content\") pod \"redhat-marketplace-84jbh\" (UID: \"217e65b9-b1b5-4244-930b-b85bc2e0a948\") " pod="openshift-marketplace/redhat-marketplace-84jbh" Jan 29 17:10:36 crc kubenswrapper[4886]: I0129 17:10:36.306363 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96q7f\" (UniqueName: \"kubernetes.io/projected/217e65b9-b1b5-4244-930b-b85bc2e0a948-kube-api-access-96q7f\") pod \"redhat-marketplace-84jbh\" (UID: \"217e65b9-b1b5-4244-930b-b85bc2e0a948\") " pod="openshift-marketplace/redhat-marketplace-84jbh" Jan 29 17:10:36 crc kubenswrapper[4886]: I0129 17:10:36.393555 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-84jbh" Jan 29 17:10:36 crc kubenswrapper[4886]: I0129 17:10:36.925620 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-84jbh"] Jan 29 17:10:36 crc kubenswrapper[4886]: W0129 17:10:36.932829 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod217e65b9_b1b5_4244_930b_b85bc2e0a948.slice/crio-de22974c151bb71b46851f3f7e77eea61bb7ffc33602315145dbd816afde3589 WatchSource:0}: Error finding container de22974c151bb71b46851f3f7e77eea61bb7ffc33602315145dbd816afde3589: Status 404 returned error can't find the container with id de22974c151bb71b46851f3f7e77eea61bb7ffc33602315145dbd816afde3589 Jan 29 17:10:37 crc kubenswrapper[4886]: I0129 17:10:37.219548 4886 generic.go:334] "Generic (PLEG): container finished" podID="217e65b9-b1b5-4244-930b-b85bc2e0a948" containerID="44adaef6a4c07eda3623f3ba09f063d46a2dbeeca14db313ce6dac3eb8544707" exitCode=0 Jan 29 17:10:37 crc kubenswrapper[4886]: I0129 17:10:37.221666 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-84jbh" event={"ID":"217e65b9-b1b5-4244-930b-b85bc2e0a948","Type":"ContainerDied","Data":"44adaef6a4c07eda3623f3ba09f063d46a2dbeeca14db313ce6dac3eb8544707"} Jan 29 17:10:37 crc kubenswrapper[4886]: I0129 17:10:37.221743 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-84jbh" event={"ID":"217e65b9-b1b5-4244-930b-b85bc2e0a948","Type":"ContainerStarted","Data":"de22974c151bb71b46851f3f7e77eea61bb7ffc33602315145dbd816afde3589"} Jan 29 17:10:38 crc kubenswrapper[4886]: I0129 17:10:38.238184 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-84jbh" event={"ID":"217e65b9-b1b5-4244-930b-b85bc2e0a948","Type":"ContainerStarted","Data":"a73252860a50c52042a273920d1fa676ee207346afa4366e940b19fa67393146"} Jan 29 17:10:39 crc kubenswrapper[4886]: I0129 17:10:39.479665 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 17:10:39 crc kubenswrapper[4886]: I0129 17:10:39.481709 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 17:10:39 crc kubenswrapper[4886]: I0129 17:10:39.484431 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 17:10:40 crc kubenswrapper[4886]: I0129 17:10:40.255217 4886 generic.go:334] "Generic (PLEG): container finished" podID="217e65b9-b1b5-4244-930b-b85bc2e0a948" containerID="a73252860a50c52042a273920d1fa676ee207346afa4366e940b19fa67393146" exitCode=0 Jan 29 17:10:40 crc kubenswrapper[4886]: I0129 
17:10:40.255319 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-84jbh" event={"ID":"217e65b9-b1b5-4244-930b-b85bc2e0a948","Type":"ContainerDied","Data":"a73252860a50c52042a273920d1fa676ee207346afa4366e940b19fa67393146"} Jan 29 17:10:40 crc kubenswrapper[4886]: I0129 17:10:40.270040 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 17:10:41 crc kubenswrapper[4886]: I0129 17:10:41.270003 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-84jbh" event={"ID":"217e65b9-b1b5-4244-930b-b85bc2e0a948","Type":"ContainerStarted","Data":"1cf121c05278d2a79fa62d807a2f7e30e9e3f7f37ffab83863f6b16765571bd1"} Jan 29 17:10:41 crc kubenswrapper[4886]: I0129 17:10:41.311818 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 17:10:41 crc kubenswrapper[4886]: I0129 17:10:41.322801 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-84jbh" podStartSLOduration=1.877611385 podStartE2EDuration="5.322777416s" podCreationTimestamp="2026-01-29 17:10:36 +0000 UTC" firstStartedPulling="2026-01-29 17:10:37.222067491 +0000 UTC m=+2920.130786763" lastFinishedPulling="2026-01-29 17:10:40.667233522 +0000 UTC m=+2923.575952794" observedRunningTime="2026-01-29 17:10:41.289625187 +0000 UTC m=+2924.198344469" watchObservedRunningTime="2026-01-29 17:10:41.322777416 +0000 UTC m=+2924.231496698" Jan 29 17:10:45 crc kubenswrapper[4886]: I0129 17:10:45.358953 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 17:10:45 crc kubenswrapper[4886]: I0129 17:10:45.359774 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="dba0c99a-0f14-42bd-8822-ee79fc73ee41" containerName="kube-state-metrics" containerID="cri-o://27931458465a13e72788f87cbc8b654d38049cab2e1e500e5508e4b6b86f09b2" gracePeriod=30 Jan 29 17:10:45 crc kubenswrapper[4886]: I0129 17:10:45.499353 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 29 17:10:45 crc kubenswrapper[4886]: I0129 17:10:45.500177 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="f0d54f6d-4531-4707-8c1a-aed5e0e36d0e" containerName="mysqld-exporter" containerID="cri-o://2df9bc2e05bc1630cc3e5fb6a640fa85bdf65d2d98be5d0f01536073ed245e66" gracePeriod=30 Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.207738 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.214760 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.330083 4886 generic.go:334] "Generic (PLEG): container finished" podID="f0d54f6d-4531-4707-8c1a-aed5e0e36d0e" containerID="2df9bc2e05bc1630cc3e5fb6a640fa85bdf65d2d98be5d0f01536073ed245e66" exitCode=2 Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.330140 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.330153 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"f0d54f6d-4531-4707-8c1a-aed5e0e36d0e","Type":"ContainerDied","Data":"2df9bc2e05bc1630cc3e5fb6a640fa85bdf65d2d98be5d0f01536073ed245e66"} Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.330182 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"f0d54f6d-4531-4707-8c1a-aed5e0e36d0e","Type":"ContainerDied","Data":"a4b442eb660a759ea9b06148625ca4e079373c7e47cea96d0478208100ae22a9"} Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.330201 4886 scope.go:117] "RemoveContainer" containerID="2df9bc2e05bc1630cc3e5fb6a640fa85bdf65d2d98be5d0f01536073ed245e66" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.333165 4886 generic.go:334] "Generic (PLEG): container finished" podID="dba0c99a-0f14-42bd-8822-ee79fc73ee41" containerID="27931458465a13e72788f87cbc8b654d38049cab2e1e500e5508e4b6b86f09b2" exitCode=2 Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.333198 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dba0c99a-0f14-42bd-8822-ee79fc73ee41","Type":"ContainerDied","Data":"27931458465a13e72788f87cbc8b654d38049cab2e1e500e5508e4b6b86f09b2"} Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.333203 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.333221 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dba0c99a-0f14-42bd-8822-ee79fc73ee41","Type":"ContainerDied","Data":"e23683912c13c24ac6376c0e92dd23177282cc9bf4441644e7ddbf8a433b486b"} Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.370446 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrp8r\" (UniqueName: \"kubernetes.io/projected/dba0c99a-0f14-42bd-8822-ee79fc73ee41-kube-api-access-xrp8r\") pod \"dba0c99a-0f14-42bd-8822-ee79fc73ee41\" (UID: \"dba0c99a-0f14-42bd-8822-ee79fc73ee41\") " Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.370704 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0d54f6d-4531-4707-8c1a-aed5e0e36d0e-config-data\") pod \"f0d54f6d-4531-4707-8c1a-aed5e0e36d0e\" (UID: \"f0d54f6d-4531-4707-8c1a-aed5e0e36d0e\") " Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.370905 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d54f6d-4531-4707-8c1a-aed5e0e36d0e-combined-ca-bundle\") pod \"f0d54f6d-4531-4707-8c1a-aed5e0e36d0e\" (UID: \"f0d54f6d-4531-4707-8c1a-aed5e0e36d0e\") " Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.370926 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v52w\" (UniqueName: \"kubernetes.io/projected/f0d54f6d-4531-4707-8c1a-aed5e0e36d0e-kube-api-access-5v52w\") pod \"f0d54f6d-4531-4707-8c1a-aed5e0e36d0e\" (UID: \"f0d54f6d-4531-4707-8c1a-aed5e0e36d0e\") " Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.372487 4886 scope.go:117] "RemoveContainer" containerID="2df9bc2e05bc1630cc3e5fb6a640fa85bdf65d2d98be5d0f01536073ed245e66" Jan 29 17:10:46 crc kubenswrapper[4886]: E0129 
17:10:46.373137 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2df9bc2e05bc1630cc3e5fb6a640fa85bdf65d2d98be5d0f01536073ed245e66\": container with ID starting with 2df9bc2e05bc1630cc3e5fb6a640fa85bdf65d2d98be5d0f01536073ed245e66 not found: ID does not exist" containerID="2df9bc2e05bc1630cc3e5fb6a640fa85bdf65d2d98be5d0f01536073ed245e66" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.373186 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2df9bc2e05bc1630cc3e5fb6a640fa85bdf65d2d98be5d0f01536073ed245e66"} err="failed to get container status \"2df9bc2e05bc1630cc3e5fb6a640fa85bdf65d2d98be5d0f01536073ed245e66\": rpc error: code = NotFound desc = could not find container \"2df9bc2e05bc1630cc3e5fb6a640fa85bdf65d2d98be5d0f01536073ed245e66\": container with ID starting with 2df9bc2e05bc1630cc3e5fb6a640fa85bdf65d2d98be5d0f01536073ed245e66 not found: ID does not exist" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.373210 4886 scope.go:117] "RemoveContainer" containerID="27931458465a13e72788f87cbc8b654d38049cab2e1e500e5508e4b6b86f09b2" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.378169 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0d54f6d-4531-4707-8c1a-aed5e0e36d0e-kube-api-access-5v52w" (OuterVolumeSpecName: "kube-api-access-5v52w") pod "f0d54f6d-4531-4707-8c1a-aed5e0e36d0e" (UID: "f0d54f6d-4531-4707-8c1a-aed5e0e36d0e"). InnerVolumeSpecName "kube-api-access-5v52w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.383455 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dba0c99a-0f14-42bd-8822-ee79fc73ee41-kube-api-access-xrp8r" (OuterVolumeSpecName: "kube-api-access-xrp8r") pod "dba0c99a-0f14-42bd-8822-ee79fc73ee41" (UID: "dba0c99a-0f14-42bd-8822-ee79fc73ee41"). InnerVolumeSpecName "kube-api-access-xrp8r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.395289 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-84jbh" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.395586 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-84jbh" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.406376 4886 scope.go:117] "RemoveContainer" containerID="27931458465a13e72788f87cbc8b654d38049cab2e1e500e5508e4b6b86f09b2" Jan 29 17:10:46 crc kubenswrapper[4886]: E0129 17:10:46.406835 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27931458465a13e72788f87cbc8b654d38049cab2e1e500e5508e4b6b86f09b2\": container with ID starting with 27931458465a13e72788f87cbc8b654d38049cab2e1e500e5508e4b6b86f09b2 not found: ID does not exist" containerID="27931458465a13e72788f87cbc8b654d38049cab2e1e500e5508e4b6b86f09b2" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.406872 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27931458465a13e72788f87cbc8b654d38049cab2e1e500e5508e4b6b86f09b2"} err="failed to get container status \"27931458465a13e72788f87cbc8b654d38049cab2e1e500e5508e4b6b86f09b2\": rpc error: code = NotFound desc = could not find container \"27931458465a13e72788f87cbc8b654d38049cab2e1e500e5508e4b6b86f09b2\": container with ID starting with 27931458465a13e72788f87cbc8b654d38049cab2e1e500e5508e4b6b86f09b2 not found: ID does not exist" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.423711 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0d54f6d-4531-4707-8c1a-aed5e0e36d0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0d54f6d-4531-4707-8c1a-aed5e0e36d0e" (UID: "f0d54f6d-4531-4707-8c1a-aed5e0e36d0e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.464290 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-84jbh" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.466171 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0d54f6d-4531-4707-8c1a-aed5e0e36d0e-config-data" (OuterVolumeSpecName: "config-data") pod "f0d54f6d-4531-4707-8c1a-aed5e0e36d0e" (UID: "f0d54f6d-4531-4707-8c1a-aed5e0e36d0e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.479984 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0d54f6d-4531-4707-8c1a-aed5e0e36d0e-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.480622 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d54f6d-4531-4707-8c1a-aed5e0e36d0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.480714 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5v52w\" (UniqueName: \"kubernetes.io/projected/f0d54f6d-4531-4707-8c1a-aed5e0e36d0e-kube-api-access-5v52w\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.480821 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrp8r\" (UniqueName: \"kubernetes.io/projected/dba0c99a-0f14-42bd-8822-ee79fc73ee41-kube-api-access-xrp8r\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.707514 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.737130 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.754065 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.775136 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.785860 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 17:10:46 crc kubenswrapper[4886]: E0129 17:10:46.786448 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dba0c99a-0f14-42bd-8822-ee79fc73ee41" containerName="kube-state-metrics" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.786466 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="dba0c99a-0f14-42bd-8822-ee79fc73ee41" containerName="kube-state-metrics" Jan 29 17:10:46 crc kubenswrapper[4886]: E0129 17:10:46.786489 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0d54f6d-4531-4707-8c1a-aed5e0e36d0e" containerName="mysqld-exporter" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.786495 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0d54f6d-4531-4707-8c1a-aed5e0e36d0e" containerName="mysqld-exporter" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.786725 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="dba0c99a-0f14-42bd-8822-ee79fc73ee41" containerName="kube-state-metrics" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.786741 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0d54f6d-4531-4707-8c1a-aed5e0e36d0e" containerName="mysqld-exporter" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.787471 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.790059 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.791076 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.799399 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.801855 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.804281 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.804501 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.812631 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.825230 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.889735 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa7423ef-f68a-4969-a81b-fd2ce4dbc16a-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"aa7423ef-f68a-4969-a81b-fd2ce4dbc16a\") " pod="openstack/mysqld-exporter-0" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.889805 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa7423ef-f68a-4969-a81b-fd2ce4dbc16a-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"aa7423ef-f68a-4969-a81b-fd2ce4dbc16a\") " pod="openstack/mysqld-exporter-0" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.889839 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fa42ea64-73bc-439c-802c-65ef65a39015-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"fa42ea64-73bc-439c-802c-65ef65a39015\") " pod="openstack/kube-state-metrics-0" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.889983 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa42ea64-73bc-439c-802c-65ef65a39015-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"fa42ea64-73bc-439c-802c-65ef65a39015\") " pod="openstack/kube-state-metrics-0" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.890273 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6npsv\" (UniqueName: \"kubernetes.io/projected/fa42ea64-73bc-439c-802c-65ef65a39015-kube-api-access-6npsv\") pod \"kube-state-metrics-0\" (UID: \"fa42ea64-73bc-439c-802c-65ef65a39015\") " pod="openstack/kube-state-metrics-0" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.890469 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa42ea64-73bc-439c-802c-65ef65a39015-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"fa42ea64-73bc-439c-802c-65ef65a39015\") " pod="openstack/kube-state-metrics-0" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.890506 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa7423ef-f68a-4969-a81b-fd2ce4dbc16a-config-data\") pod \"mysqld-exporter-0\" (UID: \"aa7423ef-f68a-4969-a81b-fd2ce4dbc16a\") " pod="openstack/mysqld-exporter-0" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.890604 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x9qd\" (UniqueName: \"kubernetes.io/projected/aa7423ef-f68a-4969-a81b-fd2ce4dbc16a-kube-api-access-8x9qd\") pod \"mysqld-exporter-0\" (UID: \"aa7423ef-f68a-4969-a81b-fd2ce4dbc16a\") " pod="openstack/mysqld-exporter-0" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.992774 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa7423ef-f68a-4969-a81b-fd2ce4dbc16a-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"aa7423ef-f68a-4969-a81b-fd2ce4dbc16a\") " pod="openstack/mysqld-exporter-0" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.992836 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa7423ef-f68a-4969-a81b-fd2ce4dbc16a-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"aa7423ef-f68a-4969-a81b-fd2ce4dbc16a\") " pod="openstack/mysqld-exporter-0" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.992878 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fa42ea64-73bc-439c-802c-65ef65a39015-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"fa42ea64-73bc-439c-802c-65ef65a39015\") " pod="openstack/kube-state-metrics-0" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.992955 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa42ea64-73bc-439c-802c-65ef65a39015-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"fa42ea64-73bc-439c-802c-65ef65a39015\") " pod="openstack/kube-state-metrics-0" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.993057 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6npsv\" (UniqueName: \"kubernetes.io/projected/fa42ea64-73bc-439c-802c-65ef65a39015-kube-api-access-6npsv\") pod \"kube-state-metrics-0\" (UID: \"fa42ea64-73bc-439c-802c-65ef65a39015\") " pod="openstack/kube-state-metrics-0" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.993108 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa42ea64-73bc-439c-802c-65ef65a39015-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"fa42ea64-73bc-439c-802c-65ef65a39015\") " pod="openstack/kube-state-metrics-0" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.993133 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/aa7423ef-f68a-4969-a81b-fd2ce4dbc16a-config-data\") pod \"mysqld-exporter-0\" (UID: \"aa7423ef-f68a-4969-a81b-fd2ce4dbc16a\") " pod="openstack/mysqld-exporter-0" Jan 29 17:10:46 crc kubenswrapper[4886]: I0129 17:10:46.993176 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x9qd\" (UniqueName: \"kubernetes.io/projected/aa7423ef-f68a-4969-a81b-fd2ce4dbc16a-kube-api-access-8x9qd\") pod \"mysqld-exporter-0\" (UID: \"aa7423ef-f68a-4969-a81b-fd2ce4dbc16a\") " pod="openstack/mysqld-exporter-0" Jan 29 17:10:47 crc kubenswrapper[4886]: I0129 17:10:47.000426 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fa42ea64-73bc-439c-802c-65ef65a39015-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"fa42ea64-73bc-439c-802c-65ef65a39015\") " pod="openstack/kube-state-metrics-0" Jan 29 17:10:47 crc kubenswrapper[4886]: I0129 17:10:47.000505 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa42ea64-73bc-439c-802c-65ef65a39015-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"fa42ea64-73bc-439c-802c-65ef65a39015\") " pod="openstack/kube-state-metrics-0" Jan 29 17:10:47 crc kubenswrapper[4886]: I0129 17:10:47.000680 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa42ea64-73bc-439c-802c-65ef65a39015-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"fa42ea64-73bc-439c-802c-65ef65a39015\") " pod="openstack/kube-state-metrics-0" Jan 29 17:10:47 crc kubenswrapper[4886]: I0129 17:10:47.000677 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa7423ef-f68a-4969-a81b-fd2ce4dbc16a-config-data\") pod \"mysqld-exporter-0\" (UID: \"aa7423ef-f68a-4969-a81b-fd2ce4dbc16a\") " pod="openstack/mysqld-exporter-0" Jan 29 17:10:47 crc kubenswrapper[4886]: I0129 17:10:47.001278 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa7423ef-f68a-4969-a81b-fd2ce4dbc16a-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"aa7423ef-f68a-4969-a81b-fd2ce4dbc16a\") " pod="openstack/mysqld-exporter-0" Jan 29 17:10:47 crc kubenswrapper[4886]: I0129 17:10:47.001275 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa7423ef-f68a-4969-a81b-fd2ce4dbc16a-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"aa7423ef-f68a-4969-a81b-fd2ce4dbc16a\") " pod="openstack/mysqld-exporter-0" Jan 29 17:10:47 crc kubenswrapper[4886]: I0129 17:10:47.014233 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6npsv\" (UniqueName: \"kubernetes.io/projected/fa42ea64-73bc-439c-802c-65ef65a39015-kube-api-access-6npsv\") pod \"kube-state-metrics-0\" (UID: \"fa42ea64-73bc-439c-802c-65ef65a39015\") " pod="openstack/kube-state-metrics-0" Jan 29 17:10:47 crc kubenswrapper[4886]: I0129 17:10:47.018654 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x9qd\" (UniqueName: \"kubernetes.io/projected/aa7423ef-f68a-4969-a81b-fd2ce4dbc16a-kube-api-access-8x9qd\") pod \"mysqld-exporter-0\" (UID: \"aa7423ef-f68a-4969-a81b-fd2ce4dbc16a\") " 
pod="openstack/mysqld-exporter-0" Jan 29 17:10:47 crc kubenswrapper[4886]: I0129 17:10:47.106702 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 17:10:47 crc kubenswrapper[4886]: I0129 17:10:47.122885 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 29 17:10:47 crc kubenswrapper[4886]: I0129 17:10:47.451479 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-84jbh" Jan 29 17:10:47 crc kubenswrapper[4886]: I0129 17:10:47.505732 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-84jbh"] Jan 29 17:10:47 crc kubenswrapper[4886]: I0129 17:10:47.630099 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 29 17:10:47 crc kubenswrapper[4886]: W0129 17:10:47.705954 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa42ea64_73bc_439c_802c_65ef65a39015.slice/crio-aeb8ad0df2cd782d683a3fd7adf10093560785121b51ab0a6e3cded974fa6ebc WatchSource:0}: Error finding container aeb8ad0df2cd782d683a3fd7adf10093560785121b51ab0a6e3cded974fa6ebc: Status 404 returned error can't find the container with id aeb8ad0df2cd782d683a3fd7adf10093560785121b51ab0a6e3cded974fa6ebc Jan 29 17:10:47 crc kubenswrapper[4886]: I0129 17:10:47.713752 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 17:10:47 crc kubenswrapper[4886]: I0129 17:10:47.745745 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:10:47 crc kubenswrapper[4886]: I0129 17:10:47.746086 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="51203b48-4909-45b6-8c3a-296fc4ee639c" containerName="ceilometer-central-agent" containerID="cri-o://c9c0e47c6badbee636eb54a74034a0d58d79d9a5f007d41423ec32b132adc41e" gracePeriod=30 Jan 29 17:10:47 crc kubenswrapper[4886]: I0129 17:10:47.746117 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="51203b48-4909-45b6-8c3a-296fc4ee639c" containerName="proxy-httpd" containerID="cri-o://01c6694fd4df1d797b97e25cbe9f80e6eca4f580fbbf77224f8cc99225251a03" gracePeriod=30 Jan 29 17:10:47 crc kubenswrapper[4886]: I0129 17:10:47.746215 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="51203b48-4909-45b6-8c3a-296fc4ee639c" containerName="ceilometer-notification-agent" containerID="cri-o://af32cb3d4cad94fb3c21ee16283db0307dd6a80318541f4accfe0f6d97cb6b84" gracePeriod=30 Jan 29 17:10:47 crc kubenswrapper[4886]: I0129 17:10:47.746232 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="51203b48-4909-45b6-8c3a-296fc4ee639c" containerName="sg-core" containerID="cri-o://6c975034f363da994f8f028b9f44a46d5e4b43e5df94d066fa0723bd5320a3f5" gracePeriod=30 Jan 29 17:10:48 crc kubenswrapper[4886]: E0129 17:10:48.154562 4886 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51203b48_4909_45b6_8c3a_296fc4ee639c.slice/crio-c9c0e47c6badbee636eb54a74034a0d58d79d9a5f007d41423ec32b132adc41e.scope\": RecentStats: unable to find data in memory 
cache]" Jan 29 17:10:48 crc kubenswrapper[4886]: E0129 17:10:48.154875 4886 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51203b48_4909_45b6_8c3a_296fc4ee639c.slice/crio-c9c0e47c6badbee636eb54a74034a0d58d79d9a5f007d41423ec32b132adc41e.scope\": RecentStats: unable to find data in memory cache]" Jan 29 17:10:48 crc kubenswrapper[4886]: I0129 17:10:48.397196 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"aa7423ef-f68a-4969-a81b-fd2ce4dbc16a","Type":"ContainerStarted","Data":"f5ab3a3ee772dea947ca3ed38718c8080a48c601b6be4ec50b20a99fe3b6c247"} Jan 29 17:10:48 crc kubenswrapper[4886]: I0129 17:10:48.405891 4886 generic.go:334] "Generic (PLEG): container finished" podID="51203b48-4909-45b6-8c3a-296fc4ee639c" containerID="01c6694fd4df1d797b97e25cbe9f80e6eca4f580fbbf77224f8cc99225251a03" exitCode=0 Jan 29 17:10:48 crc kubenswrapper[4886]: I0129 17:10:48.405924 4886 generic.go:334] "Generic (PLEG): container finished" podID="51203b48-4909-45b6-8c3a-296fc4ee639c" containerID="6c975034f363da994f8f028b9f44a46d5e4b43e5df94d066fa0723bd5320a3f5" exitCode=2 Jan 29 17:10:48 crc kubenswrapper[4886]: I0129 17:10:48.405953 4886 generic.go:334] "Generic (PLEG): container finished" podID="51203b48-4909-45b6-8c3a-296fc4ee639c" containerID="c9c0e47c6badbee636eb54a74034a0d58d79d9a5f007d41423ec32b132adc41e" exitCode=0 Jan 29 17:10:48 crc kubenswrapper[4886]: I0129 17:10:48.406006 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51203b48-4909-45b6-8c3a-296fc4ee639c","Type":"ContainerDied","Data":"01c6694fd4df1d797b97e25cbe9f80e6eca4f580fbbf77224f8cc99225251a03"} Jan 29 17:10:48 crc kubenswrapper[4886]: I0129 17:10:48.406031 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51203b48-4909-45b6-8c3a-296fc4ee639c","Type":"ContainerDied","Data":"6c975034f363da994f8f028b9f44a46d5e4b43e5df94d066fa0723bd5320a3f5"} Jan 29 17:10:48 crc kubenswrapper[4886]: I0129 17:10:48.406041 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51203b48-4909-45b6-8c3a-296fc4ee639c","Type":"ContainerDied","Data":"c9c0e47c6badbee636eb54a74034a0d58d79d9a5f007d41423ec32b132adc41e"} Jan 29 17:10:48 crc kubenswrapper[4886]: I0129 17:10:48.409756 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fa42ea64-73bc-439c-802c-65ef65a39015","Type":"ContainerStarted","Data":"aeb8ad0df2cd782d683a3fd7adf10093560785121b51ab0a6e3cded974fa6ebc"} Jan 29 17:10:48 crc kubenswrapper[4886]: I0129 17:10:48.629128 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dba0c99a-0f14-42bd-8822-ee79fc73ee41" path="/var/lib/kubelet/pods/dba0c99a-0f14-42bd-8822-ee79fc73ee41/volumes" Jan 29 17:10:48 crc kubenswrapper[4886]: I0129 17:10:48.629765 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0d54f6d-4531-4707-8c1a-aed5e0e36d0e" path="/var/lib/kubelet/pods/f0d54f6d-4531-4707-8c1a-aed5e0e36d0e/volumes" Jan 29 17:10:49 crc kubenswrapper[4886]: I0129 17:10:49.425628 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"aa7423ef-f68a-4969-a81b-fd2ce4dbc16a","Type":"ContainerStarted","Data":"1d925b8305416bd0e78aa2573e9ee07015a937abb9a0ce8302b468d57f13c6b7"} Jan 29 17:10:49 crc kubenswrapper[4886]: I0129 
17:10:49.428112 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fa42ea64-73bc-439c-802c-65ef65a39015","Type":"ContainerStarted","Data":"3ab4717e5b4649ebaf7fb0c6e6ca5e8969a97f1cd9b3dc4edfc0b5ab98c0de4c"} Jan 29 17:10:49 crc kubenswrapper[4886]: I0129 17:10:49.428277 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-84jbh" podUID="217e65b9-b1b5-4244-930b-b85bc2e0a948" containerName="registry-server" containerID="cri-o://1cf121c05278d2a79fa62d807a2f7e30e9e3f7f37ffab83863f6b16765571bd1" gracePeriod=2 Jan 29 17:10:49 crc kubenswrapper[4886]: I0129 17:10:49.447154 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.782854499 podStartE2EDuration="3.447132742s" podCreationTimestamp="2026-01-29 17:10:46 +0000 UTC" firstStartedPulling="2026-01-29 17:10:47.627980049 +0000 UTC m=+2930.536699321" lastFinishedPulling="2026-01-29 17:10:48.292258292 +0000 UTC m=+2931.200977564" observedRunningTime="2026-01-29 17:10:49.441282674 +0000 UTC m=+2932.350001966" watchObservedRunningTime="2026-01-29 17:10:49.447132742 +0000 UTC m=+2932.355852014" Jan 29 17:10:49 crc kubenswrapper[4886]: I0129 17:10:49.473732 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.25827197 podStartE2EDuration="3.473714362s" podCreationTimestamp="2026-01-29 17:10:46 +0000 UTC" firstStartedPulling="2026-01-29 17:10:47.708191243 +0000 UTC m=+2930.616910535" lastFinishedPulling="2026-01-29 17:10:48.923633645 +0000 UTC m=+2931.832352927" observedRunningTime="2026-01-29 17:10:49.46524009 +0000 UTC m=+2932.373959372" watchObservedRunningTime="2026-01-29 17:10:49.473714362 +0000 UTC m=+2932.382433634" Jan 29 17:10:49 crc kubenswrapper[4886]: I0129 17:10:49.656820 4886 scope.go:117] "RemoveContainer" containerID="6412eac490b1fbd3d0b00a59dd461a3eb98d94b486a8096aadd0a5be64624a01" Jan 29 17:10:49 crc kubenswrapper[4886]: I0129 17:10:49.708467 4886 scope.go:117] "RemoveContainer" containerID="8d073617833fd03b3552145f85acbb902d34a0687d97b69de74b719dca519779" Jan 29 17:10:49 crc kubenswrapper[4886]: I0129 17:10:49.756578 4886 scope.go:117] "RemoveContainer" containerID="6e26b828a472fc3b1df8fa1fda19373a058c84b6a577b9a6475d17f33176e5c8" Jan 29 17:10:49 crc kubenswrapper[4886]: I0129 17:10:49.946242 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-84jbh" Jan 29 17:10:50 crc kubenswrapper[4886]: I0129 17:10:50.069365 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96q7f\" (UniqueName: \"kubernetes.io/projected/217e65b9-b1b5-4244-930b-b85bc2e0a948-kube-api-access-96q7f\") pod \"217e65b9-b1b5-4244-930b-b85bc2e0a948\" (UID: \"217e65b9-b1b5-4244-930b-b85bc2e0a948\") " Jan 29 17:10:50 crc kubenswrapper[4886]: I0129 17:10:50.069669 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/217e65b9-b1b5-4244-930b-b85bc2e0a948-catalog-content\") pod \"217e65b9-b1b5-4244-930b-b85bc2e0a948\" (UID: \"217e65b9-b1b5-4244-930b-b85bc2e0a948\") " Jan 29 17:10:50 crc kubenswrapper[4886]: I0129 17:10:50.070146 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/217e65b9-b1b5-4244-930b-b85bc2e0a948-utilities\") pod \"217e65b9-b1b5-4244-930b-b85bc2e0a948\" (UID: \"217e65b9-b1b5-4244-930b-b85bc2e0a948\") " Jan 29 17:10:50 crc kubenswrapper[4886]: I0129 17:10:50.070777 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/217e65b9-b1b5-4244-930b-b85bc2e0a948-utilities" (OuterVolumeSpecName: "utilities") pod "217e65b9-b1b5-4244-930b-b85bc2e0a948" (UID: "217e65b9-b1b5-4244-930b-b85bc2e0a948"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:10:50 crc kubenswrapper[4886]: I0129 17:10:50.071643 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/217e65b9-b1b5-4244-930b-b85bc2e0a948-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:50 crc kubenswrapper[4886]: I0129 17:10:50.078050 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/217e65b9-b1b5-4244-930b-b85bc2e0a948-kube-api-access-96q7f" (OuterVolumeSpecName: "kube-api-access-96q7f") pod "217e65b9-b1b5-4244-930b-b85bc2e0a948" (UID: "217e65b9-b1b5-4244-930b-b85bc2e0a948"). InnerVolumeSpecName "kube-api-access-96q7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:10:50 crc kubenswrapper[4886]: I0129 17:10:50.121463 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/217e65b9-b1b5-4244-930b-b85bc2e0a948-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "217e65b9-b1b5-4244-930b-b85bc2e0a948" (UID: "217e65b9-b1b5-4244-930b-b85bc2e0a948"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:10:50 crc kubenswrapper[4886]: I0129 17:10:50.175257 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/217e65b9-b1b5-4244-930b-b85bc2e0a948-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:50 crc kubenswrapper[4886]: I0129 17:10:50.175301 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96q7f\" (UniqueName: \"kubernetes.io/projected/217e65b9-b1b5-4244-930b-b85bc2e0a948-kube-api-access-96q7f\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:50 crc kubenswrapper[4886]: I0129 17:10:50.444736 4886 generic.go:334] "Generic (PLEG): container finished" podID="217e65b9-b1b5-4244-930b-b85bc2e0a948" containerID="1cf121c05278d2a79fa62d807a2f7e30e9e3f7f37ffab83863f6b16765571bd1" exitCode=0 Jan 29 17:10:50 crc kubenswrapper[4886]: I0129 17:10:50.446266 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-84jbh" Jan 29 17:10:50 crc kubenswrapper[4886]: I0129 17:10:50.447499 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-84jbh" event={"ID":"217e65b9-b1b5-4244-930b-b85bc2e0a948","Type":"ContainerDied","Data":"1cf121c05278d2a79fa62d807a2f7e30e9e3f7f37ffab83863f6b16765571bd1"} Jan 29 17:10:50 crc kubenswrapper[4886]: I0129 17:10:50.447570 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 29 17:10:50 crc kubenswrapper[4886]: I0129 17:10:50.447589 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-84jbh" event={"ID":"217e65b9-b1b5-4244-930b-b85bc2e0a948","Type":"ContainerDied","Data":"de22974c151bb71b46851f3f7e77eea61bb7ffc33602315145dbd816afde3589"} Jan 29 17:10:50 crc kubenswrapper[4886]: I0129 17:10:50.447606 4886 scope.go:117] "RemoveContainer" containerID="1cf121c05278d2a79fa62d807a2f7e30e9e3f7f37ffab83863f6b16765571bd1" Jan 29 17:10:50 crc kubenswrapper[4886]: I0129 17:10:50.493550 4886 scope.go:117] "RemoveContainer" containerID="a73252860a50c52042a273920d1fa676ee207346afa4366e940b19fa67393146" Jan 29 17:10:50 crc kubenswrapper[4886]: I0129 17:10:50.503395 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-84jbh"] Jan 29 17:10:50 crc kubenswrapper[4886]: I0129 17:10:50.516681 4886 scope.go:117] "RemoveContainer" containerID="44adaef6a4c07eda3623f3ba09f063d46a2dbeeca14db313ce6dac3eb8544707" Jan 29 17:10:50 crc kubenswrapper[4886]: I0129 17:10:50.518801 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-84jbh"] Jan 29 17:10:50 crc kubenswrapper[4886]: I0129 17:10:50.535824 4886 scope.go:117] "RemoveContainer" containerID="1cf121c05278d2a79fa62d807a2f7e30e9e3f7f37ffab83863f6b16765571bd1" Jan 29 17:10:50 crc kubenswrapper[4886]: E0129 17:10:50.536377 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cf121c05278d2a79fa62d807a2f7e30e9e3f7f37ffab83863f6b16765571bd1\": container with ID starting with 1cf121c05278d2a79fa62d807a2f7e30e9e3f7f37ffab83863f6b16765571bd1 not found: ID does not exist" containerID="1cf121c05278d2a79fa62d807a2f7e30e9e3f7f37ffab83863f6b16765571bd1" Jan 29 17:10:50 crc kubenswrapper[4886]: I0129 17:10:50.536417 4886 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1cf121c05278d2a79fa62d807a2f7e30e9e3f7f37ffab83863f6b16765571bd1"} err="failed to get container status \"1cf121c05278d2a79fa62d807a2f7e30e9e3f7f37ffab83863f6b16765571bd1\": rpc error: code = NotFound desc = could not find container \"1cf121c05278d2a79fa62d807a2f7e30e9e3f7f37ffab83863f6b16765571bd1\": container with ID starting with 1cf121c05278d2a79fa62d807a2f7e30e9e3f7f37ffab83863f6b16765571bd1 not found: ID does not exist" Jan 29 17:10:50 crc kubenswrapper[4886]: I0129 17:10:50.536442 4886 scope.go:117] "RemoveContainer" containerID="a73252860a50c52042a273920d1fa676ee207346afa4366e940b19fa67393146" Jan 29 17:10:50 crc kubenswrapper[4886]: E0129 17:10:50.536693 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a73252860a50c52042a273920d1fa676ee207346afa4366e940b19fa67393146\": container with ID starting with a73252860a50c52042a273920d1fa676ee207346afa4366e940b19fa67393146 not found: ID does not exist" containerID="a73252860a50c52042a273920d1fa676ee207346afa4366e940b19fa67393146" Jan 29 17:10:50 crc kubenswrapper[4886]: I0129 17:10:50.536718 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a73252860a50c52042a273920d1fa676ee207346afa4366e940b19fa67393146"} err="failed to get container status \"a73252860a50c52042a273920d1fa676ee207346afa4366e940b19fa67393146\": rpc error: code = NotFound desc = could not find container \"a73252860a50c52042a273920d1fa676ee207346afa4366e940b19fa67393146\": container with ID starting with a73252860a50c52042a273920d1fa676ee207346afa4366e940b19fa67393146 not found: ID does not exist" Jan 29 17:10:50 crc kubenswrapper[4886]: I0129 17:10:50.536736 4886 scope.go:117] "RemoveContainer" containerID="44adaef6a4c07eda3623f3ba09f063d46a2dbeeca14db313ce6dac3eb8544707" Jan 29 17:10:50 crc kubenswrapper[4886]: E0129 17:10:50.536947 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44adaef6a4c07eda3623f3ba09f063d46a2dbeeca14db313ce6dac3eb8544707\": container with ID starting with 44adaef6a4c07eda3623f3ba09f063d46a2dbeeca14db313ce6dac3eb8544707 not found: ID does not exist" containerID="44adaef6a4c07eda3623f3ba09f063d46a2dbeeca14db313ce6dac3eb8544707" Jan 29 17:10:50 crc kubenswrapper[4886]: I0129 17:10:50.536965 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44adaef6a4c07eda3623f3ba09f063d46a2dbeeca14db313ce6dac3eb8544707"} err="failed to get container status \"44adaef6a4c07eda3623f3ba09f063d46a2dbeeca14db313ce6dac3eb8544707\": rpc error: code = NotFound desc = could not find container \"44adaef6a4c07eda3623f3ba09f063d46a2dbeeca14db313ce6dac3eb8544707\": container with ID starting with 44adaef6a4c07eda3623f3ba09f063d46a2dbeeca14db313ce6dac3eb8544707 not found: ID does not exist" Jan 29 17:10:50 crc kubenswrapper[4886]: I0129 17:10:50.629436 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="217e65b9-b1b5-4244-930b-b85bc2e0a948" path="/var/lib/kubelet/pods/217e65b9-b1b5-4244-930b-b85bc2e0a948/volumes" Jan 29 17:10:51 crc kubenswrapper[4886]: I0129 17:10:51.469188 4886 generic.go:334] "Generic (PLEG): container finished" podID="51203b48-4909-45b6-8c3a-296fc4ee639c" containerID="af32cb3d4cad94fb3c21ee16283db0307dd6a80318541f4accfe0f6d97cb6b84" exitCode=0 Jan 29 17:10:51 crc kubenswrapper[4886]: I0129 17:10:51.469558 4886 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"51203b48-4909-45b6-8c3a-296fc4ee639c","Type":"ContainerDied","Data":"af32cb3d4cad94fb3c21ee16283db0307dd6a80318541f4accfe0f6d97cb6b84"} Jan 29 17:10:51 crc kubenswrapper[4886]: I0129 17:10:51.843044 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:10:51 crc kubenswrapper[4886]: I0129 17:10:51.982823 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51203b48-4909-45b6-8c3a-296fc4ee639c-sg-core-conf-yaml\") pod \"51203b48-4909-45b6-8c3a-296fc4ee639c\" (UID: \"51203b48-4909-45b6-8c3a-296fc4ee639c\") " Jan 29 17:10:51 crc kubenswrapper[4886]: I0129 17:10:51.982922 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51203b48-4909-45b6-8c3a-296fc4ee639c-run-httpd\") pod \"51203b48-4909-45b6-8c3a-296fc4ee639c\" (UID: \"51203b48-4909-45b6-8c3a-296fc4ee639c\") " Jan 29 17:10:51 crc kubenswrapper[4886]: I0129 17:10:51.983256 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77p6n\" (UniqueName: \"kubernetes.io/projected/51203b48-4909-45b6-8c3a-296fc4ee639c-kube-api-access-77p6n\") pod \"51203b48-4909-45b6-8c3a-296fc4ee639c\" (UID: \"51203b48-4909-45b6-8c3a-296fc4ee639c\") " Jan 29 17:10:51 crc kubenswrapper[4886]: I0129 17:10:51.983395 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51203b48-4909-45b6-8c3a-296fc4ee639c-config-data\") pod \"51203b48-4909-45b6-8c3a-296fc4ee639c\" (UID: \"51203b48-4909-45b6-8c3a-296fc4ee639c\") " Jan 29 17:10:51 crc kubenswrapper[4886]: I0129 17:10:51.983420 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51203b48-4909-45b6-8c3a-296fc4ee639c-scripts\") pod \"51203b48-4909-45b6-8c3a-296fc4ee639c\" (UID: \"51203b48-4909-45b6-8c3a-296fc4ee639c\") " Jan 29 17:10:51 crc kubenswrapper[4886]: I0129 17:10:51.983424 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51203b48-4909-45b6-8c3a-296fc4ee639c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "51203b48-4909-45b6-8c3a-296fc4ee639c" (UID: "51203b48-4909-45b6-8c3a-296fc4ee639c"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:10:51 crc kubenswrapper[4886]: I0129 17:10:51.983496 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51203b48-4909-45b6-8c3a-296fc4ee639c-combined-ca-bundle\") pod \"51203b48-4909-45b6-8c3a-296fc4ee639c\" (UID: \"51203b48-4909-45b6-8c3a-296fc4ee639c\") " Jan 29 17:10:51 crc kubenswrapper[4886]: I0129 17:10:51.983545 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51203b48-4909-45b6-8c3a-296fc4ee639c-log-httpd\") pod \"51203b48-4909-45b6-8c3a-296fc4ee639c\" (UID: \"51203b48-4909-45b6-8c3a-296fc4ee639c\") " Jan 29 17:10:51 crc kubenswrapper[4886]: I0129 17:10:51.984236 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51203b48-4909-45b6-8c3a-296fc4ee639c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "51203b48-4909-45b6-8c3a-296fc4ee639c" (UID: "51203b48-4909-45b6-8c3a-296fc4ee639c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:10:51 crc kubenswrapper[4886]: I0129 17:10:51.985395 4886 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51203b48-4909-45b6-8c3a-296fc4ee639c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:51 crc kubenswrapper[4886]: I0129 17:10:51.985412 4886 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51203b48-4909-45b6-8c3a-296fc4ee639c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:51 crc kubenswrapper[4886]: I0129 17:10:51.991579 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51203b48-4909-45b6-8c3a-296fc4ee639c-scripts" (OuterVolumeSpecName: "scripts") pod "51203b48-4909-45b6-8c3a-296fc4ee639c" (UID: "51203b48-4909-45b6-8c3a-296fc4ee639c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.003556 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51203b48-4909-45b6-8c3a-296fc4ee639c-kube-api-access-77p6n" (OuterVolumeSpecName: "kube-api-access-77p6n") pod "51203b48-4909-45b6-8c3a-296fc4ee639c" (UID: "51203b48-4909-45b6-8c3a-296fc4ee639c"). InnerVolumeSpecName "kube-api-access-77p6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.014560 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51203b48-4909-45b6-8c3a-296fc4ee639c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "51203b48-4909-45b6-8c3a-296fc4ee639c" (UID: "51203b48-4909-45b6-8c3a-296fc4ee639c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.082057 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51203b48-4909-45b6-8c3a-296fc4ee639c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51203b48-4909-45b6-8c3a-296fc4ee639c" (UID: "51203b48-4909-45b6-8c3a-296fc4ee639c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.087005 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77p6n\" (UniqueName: \"kubernetes.io/projected/51203b48-4909-45b6-8c3a-296fc4ee639c-kube-api-access-77p6n\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.087047 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51203b48-4909-45b6-8c3a-296fc4ee639c-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.087062 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51203b48-4909-45b6-8c3a-296fc4ee639c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.087072 4886 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51203b48-4909-45b6-8c3a-296fc4ee639c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.109074 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51203b48-4909-45b6-8c3a-296fc4ee639c-config-data" (OuterVolumeSpecName: "config-data") pod "51203b48-4909-45b6-8c3a-296fc4ee639c" (UID: "51203b48-4909-45b6-8c3a-296fc4ee639c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.189249 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51203b48-4909-45b6-8c3a-296fc4ee639c-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.487171 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51203b48-4909-45b6-8c3a-296fc4ee639c","Type":"ContainerDied","Data":"de5f49918f6704400cdc2de0d7791eff23d5b705cf50d627099de407ae90448b"} Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.487225 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.487232 4886 scope.go:117] "RemoveContainer" containerID="01c6694fd4df1d797b97e25cbe9f80e6eca4f580fbbf77224f8cc99225251a03" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.510543 4886 scope.go:117] "RemoveContainer" containerID="6c975034f363da994f8f028b9f44a46d5e4b43e5df94d066fa0723bd5320a3f5" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.527988 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.564455 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.567217 4886 scope.go:117] "RemoveContainer" containerID="af32cb3d4cad94fb3c21ee16283db0307dd6a80318541f4accfe0f6d97cb6b84" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.583064 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:10:52 crc kubenswrapper[4886]: E0129 17:10:52.583727 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51203b48-4909-45b6-8c3a-296fc4ee639c" containerName="ceilometer-notification-agent" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.583755 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="51203b48-4909-45b6-8c3a-296fc4ee639c" containerName="ceilometer-notification-agent" Jan 29 17:10:52 crc kubenswrapper[4886]: E0129 17:10:52.583775 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51203b48-4909-45b6-8c3a-296fc4ee639c" containerName="ceilometer-central-agent" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.583785 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="51203b48-4909-45b6-8c3a-296fc4ee639c" containerName="ceilometer-central-agent" Jan 29 17:10:52 crc kubenswrapper[4886]: E0129 17:10:52.583813 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="217e65b9-b1b5-4244-930b-b85bc2e0a948" containerName="registry-server" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.583822 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="217e65b9-b1b5-4244-930b-b85bc2e0a948" containerName="registry-server" Jan 29 17:10:52 crc kubenswrapper[4886]: E0129 17:10:52.583836 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51203b48-4909-45b6-8c3a-296fc4ee639c" containerName="sg-core" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.583844 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="51203b48-4909-45b6-8c3a-296fc4ee639c" containerName="sg-core" Jan 29 17:10:52 crc kubenswrapper[4886]: E0129 17:10:52.583866 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="217e65b9-b1b5-4244-930b-b85bc2e0a948" containerName="extract-utilities" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.583875 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="217e65b9-b1b5-4244-930b-b85bc2e0a948" containerName="extract-utilities" Jan 29 17:10:52 crc kubenswrapper[4886]: E0129 17:10:52.583891 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51203b48-4909-45b6-8c3a-296fc4ee639c" containerName="proxy-httpd" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.583899 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="51203b48-4909-45b6-8c3a-296fc4ee639c" containerName="proxy-httpd" Jan 29 17:10:52 crc kubenswrapper[4886]: E0129 17:10:52.583918 4886 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="217e65b9-b1b5-4244-930b-b85bc2e0a948" containerName="extract-content" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.583928 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="217e65b9-b1b5-4244-930b-b85bc2e0a948" containerName="extract-content" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.584210 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="51203b48-4909-45b6-8c3a-296fc4ee639c" containerName="ceilometer-notification-agent" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.584235 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="51203b48-4909-45b6-8c3a-296fc4ee639c" containerName="proxy-httpd" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.584257 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="51203b48-4909-45b6-8c3a-296fc4ee639c" containerName="sg-core" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.584267 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="217e65b9-b1b5-4244-930b-b85bc2e0a948" containerName="registry-server" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.584289 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="51203b48-4909-45b6-8c3a-296fc4ee639c" containerName="ceilometer-central-agent" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.587214 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.590022 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.590355 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.590603 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.599959 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23f9894b-5940-4f78-9062-719f7e7eca3a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"23f9894b-5940-4f78-9062-719f7e7eca3a\") " pod="openstack/ceilometer-0" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.600022 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/23f9894b-5940-4f78-9062-719f7e7eca3a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"23f9894b-5940-4f78-9062-719f7e7eca3a\") " pod="openstack/ceilometer-0" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.600070 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/23f9894b-5940-4f78-9062-719f7e7eca3a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"23f9894b-5940-4f78-9062-719f7e7eca3a\") " pod="openstack/ceilometer-0" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.600099 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqdzk\" (UniqueName: \"kubernetes.io/projected/23f9894b-5940-4f78-9062-719f7e7eca3a-kube-api-access-bqdzk\") pod \"ceilometer-0\" (UID: \"23f9894b-5940-4f78-9062-719f7e7eca3a\") " pod="openstack/ceilometer-0" Jan 29 17:10:52 crc 
kubenswrapper[4886]: I0129 17:10:52.600143 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23f9894b-5940-4f78-9062-719f7e7eca3a-log-httpd\") pod \"ceilometer-0\" (UID: \"23f9894b-5940-4f78-9062-719f7e7eca3a\") " pod="openstack/ceilometer-0" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.600225 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23f9894b-5940-4f78-9062-719f7e7eca3a-config-data\") pod \"ceilometer-0\" (UID: \"23f9894b-5940-4f78-9062-719f7e7eca3a\") " pod="openstack/ceilometer-0" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.600300 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23f9894b-5940-4f78-9062-719f7e7eca3a-run-httpd\") pod \"ceilometer-0\" (UID: \"23f9894b-5940-4f78-9062-719f7e7eca3a\") " pod="openstack/ceilometer-0" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.600387 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23f9894b-5940-4f78-9062-719f7e7eca3a-scripts\") pod \"ceilometer-0\" (UID: \"23f9894b-5940-4f78-9062-719f7e7eca3a\") " pod="openstack/ceilometer-0" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.600889 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.613774 4886 scope.go:117] "RemoveContainer" containerID="c9c0e47c6badbee636eb54a74034a0d58d79d9a5f007d41423ec32b132adc41e" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.634016 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51203b48-4909-45b6-8c3a-296fc4ee639c" path="/var/lib/kubelet/pods/51203b48-4909-45b6-8c3a-296fc4ee639c/volumes" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.703240 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/23f9894b-5940-4f78-9062-719f7e7eca3a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"23f9894b-5940-4f78-9062-719f7e7eca3a\") " pod="openstack/ceilometer-0" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.703308 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqdzk\" (UniqueName: \"kubernetes.io/projected/23f9894b-5940-4f78-9062-719f7e7eca3a-kube-api-access-bqdzk\") pod \"ceilometer-0\" (UID: \"23f9894b-5940-4f78-9062-719f7e7eca3a\") " pod="openstack/ceilometer-0" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.703410 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23f9894b-5940-4f78-9062-719f7e7eca3a-log-httpd\") pod \"ceilometer-0\" (UID: \"23f9894b-5940-4f78-9062-719f7e7eca3a\") " pod="openstack/ceilometer-0" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.703529 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23f9894b-5940-4f78-9062-719f7e7eca3a-config-data\") pod \"ceilometer-0\" (UID: \"23f9894b-5940-4f78-9062-719f7e7eca3a\") " pod="openstack/ceilometer-0" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.703659 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23f9894b-5940-4f78-9062-719f7e7eca3a-run-httpd\") pod \"ceilometer-0\" (UID: \"23f9894b-5940-4f78-9062-719f7e7eca3a\") " pod="openstack/ceilometer-0" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.703779 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23f9894b-5940-4f78-9062-719f7e7eca3a-scripts\") pod \"ceilometer-0\" (UID: \"23f9894b-5940-4f78-9062-719f7e7eca3a\") " pod="openstack/ceilometer-0" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.703840 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23f9894b-5940-4f78-9062-719f7e7eca3a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"23f9894b-5940-4f78-9062-719f7e7eca3a\") " pod="openstack/ceilometer-0" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.703913 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/23f9894b-5940-4f78-9062-719f7e7eca3a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"23f9894b-5940-4f78-9062-719f7e7eca3a\") " pod="openstack/ceilometer-0" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.704763 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23f9894b-5940-4f78-9062-719f7e7eca3a-run-httpd\") pod \"ceilometer-0\" (UID: \"23f9894b-5940-4f78-9062-719f7e7eca3a\") " pod="openstack/ceilometer-0" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.704962 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23f9894b-5940-4f78-9062-719f7e7eca3a-log-httpd\") pod \"ceilometer-0\" (UID: \"23f9894b-5940-4f78-9062-719f7e7eca3a\") " pod="openstack/ceilometer-0" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.708959 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/23f9894b-5940-4f78-9062-719f7e7eca3a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"23f9894b-5940-4f78-9062-719f7e7eca3a\") " pod="openstack/ceilometer-0" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.709088 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23f9894b-5940-4f78-9062-719f7e7eca3a-scripts\") pod \"ceilometer-0\" (UID: \"23f9894b-5940-4f78-9062-719f7e7eca3a\") " pod="openstack/ceilometer-0" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.710361 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23f9894b-5940-4f78-9062-719f7e7eca3a-config-data\") pod \"ceilometer-0\" (UID: \"23f9894b-5940-4f78-9062-719f7e7eca3a\") " pod="openstack/ceilometer-0" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.710366 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/23f9894b-5940-4f78-9062-719f7e7eca3a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"23f9894b-5940-4f78-9062-719f7e7eca3a\") " pod="openstack/ceilometer-0" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.711156 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/23f9894b-5940-4f78-9062-719f7e7eca3a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"23f9894b-5940-4f78-9062-719f7e7eca3a\") " pod="openstack/ceilometer-0" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.722641 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqdzk\" (UniqueName: \"kubernetes.io/projected/23f9894b-5940-4f78-9062-719f7e7eca3a-kube-api-access-bqdzk\") pod \"ceilometer-0\" (UID: \"23f9894b-5940-4f78-9062-719f7e7eca3a\") " pod="openstack/ceilometer-0" Jan 29 17:10:52 crc kubenswrapper[4886]: I0129 17:10:52.929606 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:10:53 crc kubenswrapper[4886]: I0129 17:10:53.478170 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:10:53 crc kubenswrapper[4886]: I0129 17:10:53.527394 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"23f9894b-5940-4f78-9062-719f7e7eca3a","Type":"ContainerStarted","Data":"d0da7ef4ede1584d49bc9408cce25318c17924d82696643df0b1e3e96c3c34f0"} Jan 29 17:10:54 crc kubenswrapper[4886]: I0129 17:10:54.539775 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"23f9894b-5940-4f78-9062-719f7e7eca3a","Type":"ContainerStarted","Data":"843c319a528bddd4c44aba6cc0736758be4c6e9ea9c94b4e1040657ccc80e6c7"} Jan 29 17:10:55 crc kubenswrapper[4886]: I0129 17:10:55.553353 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"23f9894b-5940-4f78-9062-719f7e7eca3a","Type":"ContainerStarted","Data":"ddb246caed2a5503ac0be66ecd7978cb4002333cea945243173364e30caf063f"} Jan 29 17:10:56 crc kubenswrapper[4886]: I0129 17:10:56.566615 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"23f9894b-5940-4f78-9062-719f7e7eca3a","Type":"ContainerStarted","Data":"aed5c9470747d60c829ea4caec4d37a15f4fec4d356c00fe4e8b2a5f3977bd48"} Jan 29 17:10:57 crc kubenswrapper[4886]: I0129 17:10:57.131588 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 29 17:10:58 crc kubenswrapper[4886]: I0129 17:10:58.586970 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"23f9894b-5940-4f78-9062-719f7e7eca3a","Type":"ContainerStarted","Data":"0e9e000088def39e8cd6869d2bf6cee480a2e648f4614f59664f4bcc0b5c282e"} Jan 29 17:10:58 crc kubenswrapper[4886]: I0129 17:10:58.587435 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 17:10:58 crc kubenswrapper[4886]: I0129 17:10:58.626296 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.358991151 podStartE2EDuration="6.626238711s" podCreationTimestamp="2026-01-29 17:10:52 +0000 UTC" firstStartedPulling="2026-01-29 17:10:53.481473978 +0000 UTC m=+2936.390193250" lastFinishedPulling="2026-01-29 17:10:57.748721538 +0000 UTC m=+2940.657440810" observedRunningTime="2026-01-29 17:10:58.614011422 +0000 UTC m=+2941.522730714" watchObservedRunningTime="2026-01-29 17:10:58.626238711 +0000 UTC m=+2941.534957983" Jan 29 17:11:22 crc kubenswrapper[4886]: I0129 17:11:22.966602 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 17:11:29 crc kubenswrapper[4886]: I0129 17:11:29.661669 4886 patch_prober.go:28] 
Jan 29 17:11:29 crc kubenswrapper[4886]: I0129 17:11:29.661669 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 17:11:29 crc kubenswrapper[4886]: I0129 17:11:29.662230 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 17:11:59 crc kubenswrapper[4886]: I0129 17:11:59.661770 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 17:11:59 crc kubenswrapper[4886]: I0129 17:11:59.662406 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 17:12:29 crc kubenswrapper[4886]: I0129 17:12:29.660893 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 17:12:29 crc kubenswrapper[4886]: I0129 17:12:29.661554 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 17:12:29 crc kubenswrapper[4886]: I0129 17:12:29.661617 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp"
Jan 29 17:12:29 crc kubenswrapper[4886]: I0129 17:12:29.662307 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"37523dcabcb104a05e3a585e6aacd7a7633efd02b8c8e5f7dd95e23d0d43f05d"} pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 17:12:29 crc kubenswrapper[4886]: I0129 17:12:29.662404 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" containerID="cri-o://37523dcabcb104a05e3a585e6aacd7a7633efd02b8c8e5f7dd95e23d0d43f05d" gracePeriod=600
Jan 29 17:12:29 crc kubenswrapper[4886]: E0129 17:12:29.785481 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:12:30 crc kubenswrapper[4886]: I0129 17:12:30.721759 4886 generic.go:334] "Generic (PLEG): container finished" podID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerID="37523dcabcb104a05e3a585e6aacd7a7633efd02b8c8e5f7dd95e23d0d43f05d" exitCode=0
Jan 29 17:12:30 crc kubenswrapper[4886]: I0129 17:12:30.721812 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerDied","Data":"37523dcabcb104a05e3a585e6aacd7a7633efd02b8c8e5f7dd95e23d0d43f05d"}
Jan 29 17:12:30 crc kubenswrapper[4886]: I0129 17:12:30.722116 4886 scope.go:117] "RemoveContainer" containerID="db3893b2fd9096a13f5744612d4a2bcbba80c7ed2ddb6ffa1307348c351b1963"
Jan 29 17:12:30 crc kubenswrapper[4886]: I0129 17:12:30.723076 4886 scope.go:117] "RemoveContainer" containerID="37523dcabcb104a05e3a585e6aacd7a7633efd02b8c8e5f7dd95e23d0d43f05d"
Jan 29 17:12:30 crc kubenswrapper[4886]: E0129 17:12:30.723604 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:12:44 crc kubenswrapper[4886]: I0129 17:12:44.615564 4886 scope.go:117] "RemoveContainer" containerID="37523dcabcb104a05e3a585e6aacd7a7633efd02b8c8e5f7dd95e23d0d43f05d"
Jan 29 17:12:44 crc kubenswrapper[4886]: E0129 17:12:44.616287 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:12:50 crc kubenswrapper[4886]: I0129 17:12:50.638560 4886 scope.go:117] "RemoveContainer" containerID="33ad2a1126eff6cbb88ccc77df323fa1e654c5d2155c0985168da0fd53e1864a"
Jan 29 17:12:50 crc kubenswrapper[4886]: I0129 17:12:50.698247 4886 scope.go:117] "RemoveContainer" containerID="fb8fc548f591be6e16630c1c9171e7ca1c4549f03107635ab3d54cf848daec39"
Jan 29 17:12:50 crc kubenswrapper[4886]: I0129 17:12:50.727931 4886 scope.go:117] "RemoveContainer" containerID="95a7d3b8a9e32ae8ae2e3ef610040f7131916bc7de34db8cc1af0fec9c3ef960"
Jan 29 17:12:50 crc kubenswrapper[4886]: I0129 17:12:50.749502 4886 scope.go:117] "RemoveContainer" containerID="9b68510df598b451ff2d4faad4a0af1636831487ecf72ad66ce874c635cd8d9e"
Jan 29 17:12:58 crc kubenswrapper[4886]: I0129 17:12:58.623723 4886 scope.go:117] "RemoveContainer" containerID="37523dcabcb104a05e3a585e6aacd7a7633efd02b8c8e5f7dd95e23d0d43f05d"
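From the restart at 17:12:30 onward, machine-config-daemon never runs again in this section: each sync attempt hits "CrashLoopBackOff: back-off 5m0s". Kubelet's restart backoff doubles from a 10-second base per crash and caps at five minutes, so after enough restarts every wait is the full 5m0s reported in the message; the "Error syncing pod, skipping" lines recurring every ~10-15 s are the sync loop re-evaluating the pod while it is still inside the backoff window, not extra restart attempts. A sketch of the schedule under those defaults (the real bookkeeping, including decay after a stretch of stable running, lives in kubelet's flowcontrol.Backoff):

```python
# Crash-loop restart backoff under kubelet's long-standing defaults:
# 10s base, doubling per restart, capped at 5 minutes. A sketch only.

BASE_S, CAP_S = 10, 300

def backoff_s(restart_count: int) -> int:
    """Delay imposed before restart number `restart_count` (0-based)."""
    return min(BASE_S * 2 ** restart_count, CAP_S)

for n in range(7):
    print(f"after crash {n + 1}: wait {backoff_s(n)}s")
# 10s, 20s, 40s, 80s, 160s, then 300s ("5m0s") for every later restart
```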
Jan 29 17:12:58 crc kubenswrapper[4886]: E0129 17:12:58.624445 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:13:12 crc kubenswrapper[4886]: I0129 17:13:12.616310 4886 scope.go:117] "RemoveContainer" containerID="37523dcabcb104a05e3a585e6aacd7a7633efd02b8c8e5f7dd95e23d0d43f05d"
Jan 29 17:13:12 crc kubenswrapper[4886]: E0129 17:13:12.618861 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:13:25 crc kubenswrapper[4886]: I0129 17:13:25.615685 4886 scope.go:117] "RemoveContainer" containerID="37523dcabcb104a05e3a585e6aacd7a7633efd02b8c8e5f7dd95e23d0d43f05d"
Jan 29 17:13:25 crc kubenswrapper[4886]: E0129 17:13:25.616907 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:13:38 crc kubenswrapper[4886]: I0129 17:13:38.622790 4886 scope.go:117] "RemoveContainer" containerID="37523dcabcb104a05e3a585e6aacd7a7633efd02b8c8e5f7dd95e23d0d43f05d"
Jan 29 17:13:38 crc kubenswrapper[4886]: E0129 17:13:38.624137 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:13:51 crc kubenswrapper[4886]: I0129 17:13:51.094116 4886 scope.go:117] "RemoveContainer" containerID="bfb4e65e7631317b75e0b15c39b90031add550dcb40292d0be47c6410cfdc89e"
Jan 29 17:13:51 crc kubenswrapper[4886]: I0129 17:13:51.122797 4886 scope.go:117] "RemoveContainer" containerID="2012816a934b66e60ffd90c59e1fa261b396b239468adba78a0dedfe4395c1be"
Jan 29 17:13:51 crc kubenswrapper[4886]: I0129 17:13:51.156071 4886 scope.go:117] "RemoveContainer" containerID="794f8e0bf261a512c459ecf62c8c7c26bca5d60128a7b4f23734cabe8f7c898d"
Jan 29 17:13:53 crc kubenswrapper[4886]: I0129 17:13:53.615244 4886 scope.go:117] "RemoveContainer" containerID="37523dcabcb104a05e3a585e6aacd7a7633efd02b8c8e5f7dd95e23d0d43f05d"
Jan 29 17:13:53 crc kubenswrapper[4886]: E0129 17:13:53.616383 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:14:08 crc kubenswrapper[4886]: I0129 17:14:08.625130 4886 scope.go:117] "RemoveContainer" containerID="37523dcabcb104a05e3a585e6aacd7a7633efd02b8c8e5f7dd95e23d0d43f05d"
Jan 29 17:14:08 crc kubenswrapper[4886]: E0129 17:14:08.626189 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:14:23 crc kubenswrapper[4886]: I0129 17:14:23.615027 4886 scope.go:117] "RemoveContainer" containerID="37523dcabcb104a05e3a585e6aacd7a7633efd02b8c8e5f7dd95e23d0d43f05d"
Jan 29 17:14:23 crc kubenswrapper[4886]: E0129 17:14:23.615830 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:14:35 crc kubenswrapper[4886]: I0129 17:14:35.615786 4886 scope.go:117] "RemoveContainer" containerID="37523dcabcb104a05e3a585e6aacd7a7633efd02b8c8e5f7dd95e23d0d43f05d"
Jan 29 17:14:35 crc kubenswrapper[4886]: E0129 17:14:35.618034 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:14:49 crc kubenswrapper[4886]: I0129 17:14:49.616455 4886 scope.go:117] "RemoveContainer" containerID="37523dcabcb104a05e3a585e6aacd7a7633efd02b8c8e5f7dd95e23d0d43f05d"
Jan 29 17:14:49 crc kubenswrapper[4886]: E0129 17:14:49.617508 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:14:51 crc kubenswrapper[4886]: I0129 17:14:51.292022 4886 scope.go:117] "RemoveContainer" containerID="afb5da406ee3b16e59af7913d87b7d9742dbcfd595f22b00884d57064f6bdef1"
Jan 29 17:14:51 crc kubenswrapper[4886]: I0129 17:14:51.329539 4886 scope.go:117] "RemoveContainer" containerID="62df5b8b647bd7eae2ddeb32c6165e5fc8cdbdb8c984d6b948088525b813e903"
Jan 29 17:14:51 crc kubenswrapper[4886]: I0129 17:14:51.389928 4886 scope.go:117] "RemoveContainer" containerID="be55140e95fb2c7fd3a46b1ece79fa3d9132da294caa5ac8edf498151a8ce0b2"
Jan 29 17:14:56 crc kubenswrapper[4886]: I0129 17:14:56.074089 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-sgspp"]
Jan 29 17:14:56 crc kubenswrapper[4886]: I0129 17:14:56.086869 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-5ab6-account-create-update-4xrnn"]
"SyncLoop DELETE" source="api" pods=["openstack/glance-f0b5-account-create-update-8b8vz"] Jan 29 17:14:56 crc kubenswrapper[4886]: I0129 17:14:56.118756 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-sgspp"] Jan 29 17:14:56 crc kubenswrapper[4886]: I0129 17:14:56.129320 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-4vq4n"] Jan 29 17:14:56 crc kubenswrapper[4886]: I0129 17:14:56.139125 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-00e3-account-create-update-5hhsj"] Jan 29 17:14:56 crc kubenswrapper[4886]: I0129 17:14:56.149288 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d860-account-create-update-5kd66"] Jan 29 17:14:56 crc kubenswrapper[4886]: I0129 17:14:56.159910 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-mdvpb"] Jan 29 17:14:56 crc kubenswrapper[4886]: I0129 17:14:56.172655 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-5ab6-account-create-update-4xrnn"] Jan 29 17:14:56 crc kubenswrapper[4886]: I0129 17:14:56.182751 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-d860-account-create-update-5kd66"] Jan 29 17:14:56 crc kubenswrapper[4886]: I0129 17:14:56.191955 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-00e3-account-create-update-5hhsj"] Jan 29 17:14:56 crc kubenswrapper[4886]: I0129 17:14:56.201686 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-f0b5-account-create-update-8b8vz"] Jan 29 17:14:56 crc kubenswrapper[4886]: I0129 17:14:56.212221 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-mdvpb"] Jan 29 17:14:56 crc kubenswrapper[4886]: I0129 17:14:56.226614 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-4vq4n"] Jan 29 17:14:56 crc kubenswrapper[4886]: I0129 17:14:56.626889 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29921ec8-f68f-4547-a2c0-d4d3f5de6960" path="/var/lib/kubelet/pods/29921ec8-f68f-4547-a2c0-d4d3f5de6960/volumes" Jan 29 17:14:56 crc kubenswrapper[4886]: I0129 17:14:56.627595 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66c16915-30cc-4a4f-81ff-4b82cf152968" path="/var/lib/kubelet/pods/66c16915-30cc-4a4f-81ff-4b82cf152968/volumes" Jan 29 17:14:56 crc kubenswrapper[4886]: I0129 17:14:56.628245 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d" path="/var/lib/kubelet/pods/6bcdded9-ad2a-4fcc-82f1-0a13cf85b06d/volumes" Jan 29 17:14:56 crc kubenswrapper[4886]: I0129 17:14:56.630284 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c996a30-f53d-49f1-a7d1-2ca23704b48e" path="/var/lib/kubelet/pods/7c996a30-f53d-49f1-a7d1-2ca23704b48e/volumes" Jan 29 17:14:56 crc kubenswrapper[4886]: I0129 17:14:56.631751 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe" path="/var/lib/kubelet/pods/9c4e1c71-a857-4feb-8778-ba3aa8b7dbfe/volumes" Jan 29 17:14:56 crc kubenswrapper[4886]: I0129 17:14:56.632565 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa302a57-5c6b-41b1-ac4b-7d9095b7b65a" path="/var/lib/kubelet/pods/aa302a57-5c6b-41b1-ac4b-7d9095b7b65a/volumes" Jan 29 17:14:56 crc kubenswrapper[4886]: I0129 17:14:56.633181 4886 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="b696cd6b-840b-4505-9010-114d223a90e9" path="/var/lib/kubelet/pods/b696cd6b-840b-4505-9010-114d223a90e9/volumes" Jan 29 17:14:57 crc kubenswrapper[4886]: I0129 17:14:57.030535 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-fw887"] Jan 29 17:14:57 crc kubenswrapper[4886]: I0129 17:14:57.040363 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-fw887"] Jan 29 17:14:58 crc kubenswrapper[4886]: I0129 17:14:58.629186 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6479af73-81ef-4755-89b5-3a2dd44e99b3" path="/var/lib/kubelet/pods/6479af73-81ef-4755-89b5-3a2dd44e99b3/volumes" Jan 29 17:15:00 crc kubenswrapper[4886]: I0129 17:15:00.154307 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495115-pkxcz"] Jan 29 17:15:00 crc kubenswrapper[4886]: I0129 17:15:00.156342 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-pkxcz" Jan 29 17:15:00 crc kubenswrapper[4886]: I0129 17:15:00.158535 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 17:15:00 crc kubenswrapper[4886]: I0129 17:15:00.160291 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 17:15:00 crc kubenswrapper[4886]: I0129 17:15:00.174395 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495115-pkxcz"] Jan 29 17:15:00 crc kubenswrapper[4886]: I0129 17:15:00.273561 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hj4n\" (UniqueName: \"kubernetes.io/projected/875b9b50-c440-4567-b475-c890d3d5d713-kube-api-access-4hj4n\") pod \"collect-profiles-29495115-pkxcz\" (UID: \"875b9b50-c440-4567-b475-c890d3d5d713\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-pkxcz" Jan 29 17:15:00 crc kubenswrapper[4886]: I0129 17:15:00.274677 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/875b9b50-c440-4567-b475-c890d3d5d713-config-volume\") pod \"collect-profiles-29495115-pkxcz\" (UID: \"875b9b50-c440-4567-b475-c890d3d5d713\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-pkxcz" Jan 29 17:15:00 crc kubenswrapper[4886]: I0129 17:15:00.274855 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/875b9b50-c440-4567-b475-c890d3d5d713-secret-volume\") pod \"collect-profiles-29495115-pkxcz\" (UID: \"875b9b50-c440-4567-b475-c890d3d5d713\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-pkxcz" Jan 29 17:15:00 crc kubenswrapper[4886]: I0129 17:15:00.377320 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hj4n\" (UniqueName: \"kubernetes.io/projected/875b9b50-c440-4567-b475-c890d3d5d713-kube-api-access-4hj4n\") pod \"collect-profiles-29495115-pkxcz\" (UID: \"875b9b50-c440-4567-b475-c890d3d5d713\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-pkxcz" Jan 29 17:15:00 crc 
kubenswrapper[4886]: I0129 17:15:00.377468 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/875b9b50-c440-4567-b475-c890d3d5d713-config-volume\") pod \"collect-profiles-29495115-pkxcz\" (UID: \"875b9b50-c440-4567-b475-c890d3d5d713\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-pkxcz" Jan 29 17:15:00 crc kubenswrapper[4886]: I0129 17:15:00.377547 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/875b9b50-c440-4567-b475-c890d3d5d713-secret-volume\") pod \"collect-profiles-29495115-pkxcz\" (UID: \"875b9b50-c440-4567-b475-c890d3d5d713\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-pkxcz" Jan 29 17:15:00 crc kubenswrapper[4886]: I0129 17:15:00.378416 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/875b9b50-c440-4567-b475-c890d3d5d713-config-volume\") pod \"collect-profiles-29495115-pkxcz\" (UID: \"875b9b50-c440-4567-b475-c890d3d5d713\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-pkxcz" Jan 29 17:15:00 crc kubenswrapper[4886]: I0129 17:15:00.382792 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/875b9b50-c440-4567-b475-c890d3d5d713-secret-volume\") pod \"collect-profiles-29495115-pkxcz\" (UID: \"875b9b50-c440-4567-b475-c890d3d5d713\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-pkxcz" Jan 29 17:15:00 crc kubenswrapper[4886]: I0129 17:15:00.392500 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hj4n\" (UniqueName: \"kubernetes.io/projected/875b9b50-c440-4567-b475-c890d3d5d713-kube-api-access-4hj4n\") pod \"collect-profiles-29495115-pkxcz\" (UID: \"875b9b50-c440-4567-b475-c890d3d5d713\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-pkxcz" Jan 29 17:15:00 crc kubenswrapper[4886]: I0129 17:15:00.477743 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-pkxcz" Jan 29 17:15:00 crc kubenswrapper[4886]: I0129 17:15:00.956903 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495115-pkxcz"] Jan 29 17:15:01 crc kubenswrapper[4886]: I0129 17:15:01.408245 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-pkxcz" event={"ID":"875b9b50-c440-4567-b475-c890d3d5d713","Type":"ContainerStarted","Data":"db3e3f16f0932c632a2ab1ffff0f92252979a66c9e52244934f9d97bdd89246b"} Jan 29 17:15:01 crc kubenswrapper[4886]: I0129 17:15:01.408589 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-pkxcz" event={"ID":"875b9b50-c440-4567-b475-c890d3d5d713","Type":"ContainerStarted","Data":"e7264abfdb40ca1553c323e488eb75e4e7925d55c85c24f0028a060cfbb82eff"} Jan 29 17:15:01 crc kubenswrapper[4886]: I0129 17:15:01.432950 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-pkxcz" podStartSLOduration=1.432912639 podStartE2EDuration="1.432912639s" podCreationTimestamp="2026-01-29 17:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:15:01.423655886 +0000 UTC m=+3184.332375208" watchObservedRunningTime="2026-01-29 17:15:01.432912639 +0000 UTC m=+3184.341631911" Jan 29 17:15:02 crc kubenswrapper[4886]: I0129 17:15:02.420779 4886 generic.go:334] "Generic (PLEG): container finished" podID="875b9b50-c440-4567-b475-c890d3d5d713" containerID="db3e3f16f0932c632a2ab1ffff0f92252979a66c9e52244934f9d97bdd89246b" exitCode=0 Jan 29 17:15:02 crc kubenswrapper[4886]: I0129 17:15:02.420999 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-pkxcz" event={"ID":"875b9b50-c440-4567-b475-c890d3d5d713","Type":"ContainerDied","Data":"db3e3f16f0932c632a2ab1ffff0f92252979a66c9e52244934f9d97bdd89246b"} Jan 29 17:15:03 crc kubenswrapper[4886]: I0129 17:15:03.847118 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-pkxcz" Jan 29 17:15:03 crc kubenswrapper[4886]: I0129 17:15:03.962516 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/875b9b50-c440-4567-b475-c890d3d5d713-config-volume\") pod \"875b9b50-c440-4567-b475-c890d3d5d713\" (UID: \"875b9b50-c440-4567-b475-c890d3d5d713\") " Jan 29 17:15:03 crc kubenswrapper[4886]: I0129 17:15:03.962628 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/875b9b50-c440-4567-b475-c890d3d5d713-secret-volume\") pod \"875b9b50-c440-4567-b475-c890d3d5d713\" (UID: \"875b9b50-c440-4567-b475-c890d3d5d713\") " Jan 29 17:15:03 crc kubenswrapper[4886]: I0129 17:15:03.962970 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hj4n\" (UniqueName: \"kubernetes.io/projected/875b9b50-c440-4567-b475-c890d3d5d713-kube-api-access-4hj4n\") pod \"875b9b50-c440-4567-b475-c890d3d5d713\" (UID: \"875b9b50-c440-4567-b475-c890d3d5d713\") " Jan 29 17:15:03 crc kubenswrapper[4886]: I0129 17:15:03.963428 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/875b9b50-c440-4567-b475-c890d3d5d713-config-volume" (OuterVolumeSpecName: "config-volume") pod "875b9b50-c440-4567-b475-c890d3d5d713" (UID: "875b9b50-c440-4567-b475-c890d3d5d713"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:15:03 crc kubenswrapper[4886]: I0129 17:15:03.963643 4886 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/875b9b50-c440-4567-b475-c890d3d5d713-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 17:15:03 crc kubenswrapper[4886]: I0129 17:15:03.969586 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/875b9b50-c440-4567-b475-c890d3d5d713-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "875b9b50-c440-4567-b475-c890d3d5d713" (UID: "875b9b50-c440-4567-b475-c890d3d5d713"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:15:03 crc kubenswrapper[4886]: I0129 17:15:03.969716 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/875b9b50-c440-4567-b475-c890d3d5d713-kube-api-access-4hj4n" (OuterVolumeSpecName: "kube-api-access-4hj4n") pod "875b9b50-c440-4567-b475-c890d3d5d713" (UID: "875b9b50-c440-4567-b475-c890d3d5d713"). InnerVolumeSpecName "kube-api-access-4hj4n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:15:04 crc kubenswrapper[4886]: I0129 17:15:04.055236 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-xg8wq"] Jan 29 17:15:04 crc kubenswrapper[4886]: I0129 17:15:04.066005 4886 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/875b9b50-c440-4567-b475-c890d3d5d713-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 17:15:04 crc kubenswrapper[4886]: I0129 17:15:04.066049 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hj4n\" (UniqueName: \"kubernetes.io/projected/875b9b50-c440-4567-b475-c890d3d5d713-kube-api-access-4hj4n\") on node \"crc\" DevicePath \"\"" Jan 29 17:15:04 crc kubenswrapper[4886]: I0129 17:15:04.067699 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-xg8wq"] Jan 29 17:15:04 crc kubenswrapper[4886]: I0129 17:15:04.444224 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-pkxcz" event={"ID":"875b9b50-c440-4567-b475-c890d3d5d713","Type":"ContainerDied","Data":"e7264abfdb40ca1553c323e488eb75e4e7925d55c85c24f0028a060cfbb82eff"} Jan 29 17:15:04 crc kubenswrapper[4886]: I0129 17:15:04.444264 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7264abfdb40ca1553c323e488eb75e4e7925d55c85c24f0028a060cfbb82eff" Jan 29 17:15:04 crc kubenswrapper[4886]: I0129 17:15:04.444300 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-pkxcz" Jan 29 17:15:04 crc kubenswrapper[4886]: I0129 17:15:04.494263 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495070-xnbx9"] Jan 29 17:15:04 crc kubenswrapper[4886]: I0129 17:15:04.504577 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495070-xnbx9"] Jan 29 17:15:04 crc kubenswrapper[4886]: I0129 17:15:04.616850 4886 scope.go:117] "RemoveContainer" containerID="37523dcabcb104a05e3a585e6aacd7a7633efd02b8c8e5f7dd95e23d0d43f05d" Jan 29 17:15:04 crc kubenswrapper[4886]: E0129 17:15:04.617475 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:15:04 crc kubenswrapper[4886]: I0129 17:15:04.644765 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18290a86-b94a-42c5-9f50-1614077f881b" path="/var/lib/kubelet/pods/18290a86-b94a-42c5-9f50-1614077f881b/volumes" Jan 29 17:15:04 crc kubenswrapper[4886]: I0129 17:15:04.650547 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40b94c98-0561-4135-a5af-023ef5f4ad67" path="/var/lib/kubelet/pods/40b94c98-0561-4135-a5af-023ef5f4ad67/volumes" Jan 29 17:15:06 crc kubenswrapper[4886]: I0129 17:15:06.028875 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-23ad-account-create-update-2dsmj"] Jan 29 17:15:06 crc kubenswrapper[4886]: I0129 17:15:06.040098 4886 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-sl5h4"] Jan 29 17:15:06 crc kubenswrapper[4886]: I0129 17:15:06.053997 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-23ad-account-create-update-2dsmj"] Jan 29 17:15:06 crc kubenswrapper[4886]: I0129 17:15:06.066714 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-sl5h4"] Jan 29 17:15:06 crc kubenswrapper[4886]: I0129 17:15:06.628933 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2ed1f90-1318-483e-901c-bff80e1e94b6" path="/var/lib/kubelet/pods/d2ed1f90-1318-483e-901c-bff80e1e94b6/volumes" Jan 29 17:15:06 crc kubenswrapper[4886]: I0129 17:15:06.629831 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8a69a79-4e4c-4815-8cf5-0864ff2b8026" path="/var/lib/kubelet/pods/d8a69a79-4e4c-4815-8cf5-0864ff2b8026/volumes" Jan 29 17:15:17 crc kubenswrapper[4886]: I0129 17:15:17.618293 4886 scope.go:117] "RemoveContainer" containerID="37523dcabcb104a05e3a585e6aacd7a7633efd02b8c8e5f7dd95e23d0d43f05d" Jan 29 17:15:17 crc kubenswrapper[4886]: E0129 17:15:17.619953 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:15:32 crc kubenswrapper[4886]: I0129 17:15:32.615667 4886 scope.go:117] "RemoveContainer" containerID="37523dcabcb104a05e3a585e6aacd7a7633efd02b8c8e5f7dd95e23d0d43f05d" Jan 29 17:15:32 crc kubenswrapper[4886]: E0129 17:15:32.624911 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:15:43 crc kubenswrapper[4886]: I0129 17:15:43.615757 4886 scope.go:117] "RemoveContainer" containerID="37523dcabcb104a05e3a585e6aacd7a7633efd02b8c8e5f7dd95e23d0d43f05d" Jan 29 17:15:43 crc kubenswrapper[4886]: E0129 17:15:43.616518 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:15:46 crc kubenswrapper[4886]: I0129 17:15:46.070648 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-b8qfq"] Jan 29 17:15:46 crc kubenswrapper[4886]: I0129 17:15:46.107353 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-vvrp4"] Jan 29 17:15:46 crc kubenswrapper[4886]: I0129 17:15:46.118576 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-5m27f"] Jan 29 17:15:46 crc kubenswrapper[4886]: I0129 17:15:46.128638 4886 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/barbican-db-create-vvrp4"] Jan 29 17:15:46 crc kubenswrapper[4886]: I0129 17:15:46.139566 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-b8qfq"] Jan 29 17:15:46 crc kubenswrapper[4886]: I0129 17:15:46.150032 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-5m27f"] Jan 29 17:15:46 crc kubenswrapper[4886]: I0129 17:15:46.627702 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="219e979e-b3a8-42d0-8f23-737a86a2aefb" path="/var/lib/kubelet/pods/219e979e-b3a8-42d0-8f23-737a86a2aefb/volumes" Jan 29 17:15:46 crc kubenswrapper[4886]: I0129 17:15:46.628792 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61eedb40-ed14-42aa-9751-8bedcd699260" path="/var/lib/kubelet/pods/61eedb40-ed14-42aa-9751-8bedcd699260/volumes" Jan 29 17:15:46 crc kubenswrapper[4886]: I0129 17:15:46.629719 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eca25333-29b2-4c38-9e85-ebd2a0d593d6" path="/var/lib/kubelet/pods/eca25333-29b2-4c38-9e85-ebd2a0d593d6/volumes" Jan 29 17:15:49 crc kubenswrapper[4886]: I0129 17:15:49.027748 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-mj8rv"] Jan 29 17:15:49 crc kubenswrapper[4886]: I0129 17:15:49.041502 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-mj8rv"] Jan 29 17:15:50 crc kubenswrapper[4886]: I0129 17:15:50.420713 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bv9pm"] Jan 29 17:15:50 crc kubenswrapper[4886]: E0129 17:15:50.421718 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="875b9b50-c440-4567-b475-c890d3d5d713" containerName="collect-profiles" Jan 29 17:15:50 crc kubenswrapper[4886]: I0129 17:15:50.421737 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="875b9b50-c440-4567-b475-c890d3d5d713" containerName="collect-profiles" Jan 29 17:15:50 crc kubenswrapper[4886]: I0129 17:15:50.422052 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="875b9b50-c440-4567-b475-c890d3d5d713" containerName="collect-profiles" Jan 29 17:15:50 crc kubenswrapper[4886]: I0129 17:15:50.424458 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bv9pm" Jan 29 17:15:50 crc kubenswrapper[4886]: I0129 17:15:50.432393 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bv9pm"] Jan 29 17:15:50 crc kubenswrapper[4886]: I0129 17:15:50.536807 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f773961-b526-4457-870c-ac299a3e3312-catalog-content\") pod \"redhat-operators-bv9pm\" (UID: \"1f773961-b526-4457-870c-ac299a3e3312\") " pod="openshift-marketplace/redhat-operators-bv9pm" Jan 29 17:15:50 crc kubenswrapper[4886]: I0129 17:15:50.537080 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f773961-b526-4457-870c-ac299a3e3312-utilities\") pod \"redhat-operators-bv9pm\" (UID: \"1f773961-b526-4457-870c-ac299a3e3312\") " pod="openshift-marketplace/redhat-operators-bv9pm" Jan 29 17:15:50 crc kubenswrapper[4886]: I0129 17:15:50.537123 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rq4p\" (UniqueName: \"kubernetes.io/projected/1f773961-b526-4457-870c-ac299a3e3312-kube-api-access-6rq4p\") pod \"redhat-operators-bv9pm\" (UID: \"1f773961-b526-4457-870c-ac299a3e3312\") " pod="openshift-marketplace/redhat-operators-bv9pm" Jan 29 17:15:50 crc kubenswrapper[4886]: I0129 17:15:50.639603 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f34bb765-0998-45ea-bb61-9fbbc2c7359d" path="/var/lib/kubelet/pods/f34bb765-0998-45ea-bb61-9fbbc2c7359d/volumes" Jan 29 17:15:50 crc kubenswrapper[4886]: I0129 17:15:50.640201 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f773961-b526-4457-870c-ac299a3e3312-utilities\") pod \"redhat-operators-bv9pm\" (UID: \"1f773961-b526-4457-870c-ac299a3e3312\") " pod="openshift-marketplace/redhat-operators-bv9pm" Jan 29 17:15:50 crc kubenswrapper[4886]: I0129 17:15:50.640261 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rq4p\" (UniqueName: \"kubernetes.io/projected/1f773961-b526-4457-870c-ac299a3e3312-kube-api-access-6rq4p\") pod \"redhat-operators-bv9pm\" (UID: \"1f773961-b526-4457-870c-ac299a3e3312\") " pod="openshift-marketplace/redhat-operators-bv9pm" Jan 29 17:15:50 crc kubenswrapper[4886]: I0129 17:15:50.640402 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f773961-b526-4457-870c-ac299a3e3312-catalog-content\") pod \"redhat-operators-bv9pm\" (UID: \"1f773961-b526-4457-870c-ac299a3e3312\") " pod="openshift-marketplace/redhat-operators-bv9pm" Jan 29 17:15:50 crc kubenswrapper[4886]: I0129 17:15:50.640867 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f773961-b526-4457-870c-ac299a3e3312-utilities\") pod \"redhat-operators-bv9pm\" (UID: \"1f773961-b526-4457-870c-ac299a3e3312\") " pod="openshift-marketplace/redhat-operators-bv9pm" Jan 29 17:15:50 crc kubenswrapper[4886]: I0129 17:15:50.640875 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f773961-b526-4457-870c-ac299a3e3312-catalog-content\") pod \"redhat-operators-bv9pm\" 
Jan 29 17:15:50 crc kubenswrapper[4886]: I0129 17:15:50.662772 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rq4p\" (UniqueName: \"kubernetes.io/projected/1f773961-b526-4457-870c-ac299a3e3312-kube-api-access-6rq4p\") pod \"redhat-operators-bv9pm\" (UID: \"1f773961-b526-4457-870c-ac299a3e3312\") " pod="openshift-marketplace/redhat-operators-bv9pm"
Jan 29 17:15:50 crc kubenswrapper[4886]: I0129 17:15:50.746869 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bv9pm"
Jan 29 17:15:51 crc kubenswrapper[4886]: I0129 17:15:51.510859 4886 scope.go:117] "RemoveContainer" containerID="78746abbdca4d80f0a57707d5af0310c508403ee469b611bd3861cf01570354a"
Jan 29 17:15:51 crc kubenswrapper[4886]: I0129 17:15:51.511409 4886 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-2jzzb container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/ready\": context deadline exceeded" start-of-body=
Jan 29 17:15:51 crc kubenswrapper[4886]: I0129 17:15:51.511456 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2jzzb" podUID="befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.55:3101/ready\": context deadline exceeded"
Jan 29 17:15:51 crc kubenswrapper[4886]: I0129 17:15:51.581666 4886 scope.go:117] "RemoveContainer" containerID="ce7bb70d8d66605a00b65db196f138b8d093db85ba2aba770dcd073411b5b8b4"
Jan 29 17:15:51 crc kubenswrapper[4886]: I0129 17:15:51.636015 4886 scope.go:117] "RemoveContainer" containerID="2706075df7ed398bfa86a5019c0c0b891534965545aed4044f6858df83babfa9"
Jan 29 17:15:51 crc kubenswrapper[4886]: I0129 17:15:51.713830 4886 scope.go:117] "RemoveContainer" containerID="bb6b6c4443538f6a82366349284b39cf96fcba5ff7da991fc88f83ec4dbea3cd"
Jan 29 17:15:51 crc kubenswrapper[4886]: I0129 17:15:51.785076 4886 scope.go:117] "RemoveContainer" containerID="3a64bd79066ba13789ce6be118a26c29652e1e5c788ad39a1b41f13dad0dd1c1"
Jan 29 17:15:51 crc kubenswrapper[4886]: I0129 17:15:51.814959 4886 scope.go:117] "RemoveContainer" containerID="20030a467bab27996b15106f17b7491349b629c6d6de493fc3b1efb1f226e72c"
Jan 29 17:15:51 crc kubenswrapper[4886]: I0129 17:15:51.847676 4886 scope.go:117] "RemoveContainer" containerID="9211a739518fb120e2bda32757d910dcbc67d03a2ddbfea02f5bc9964d2f0a2d"
Jan 29 17:15:51 crc kubenswrapper[4886]: I0129 17:15:51.894350 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bv9pm"]
Jan 29 17:15:51 crc kubenswrapper[4886]: I0129 17:15:51.920693 4886 scope.go:117] "RemoveContainer" containerID="2e89a5a701ca89a4fedcbc0c8d956d6d340377591f80cf75f3cdedc6fb2cd6f3"
Jan 29 17:15:52 crc kubenswrapper[4886]: I0129 17:15:52.004215 4886 scope.go:117] "RemoveContainer" containerID="5f38a23b3e231c3670461bd30eb72fab48714dac00ff0dbd8042edb99ce295c4"
Jan 29 17:15:52 crc kubenswrapper[4886]: I0129 17:15:52.058768 4886 scope.go:117] "RemoveContainer" containerID="c217cd04d2dba654b23c94e4b5b9acb5912a4546fafe4781e26a2d0d53058004"
Jan 29 17:15:52 crc kubenswrapper[4886]: I0129 17:15:52.097136 4886 scope.go:117] "RemoveContainer" containerID="5019558a9253bbef2f27d289d48dcc75d2b0f7a1469d88aa8fb186da0d61df99"
Jan 29 17:15:52 crc kubenswrapper[4886]: I0129 17:15:52.139461 4886 scope.go:117] "RemoveContainer" containerID="cbbd4f5360c0e0e269db9be0e3b0c9d872ff0fa28897b05c76dba7a51c4b1e4c"
Jan 29 17:15:52 crc kubenswrapper[4886]: I0129 17:15:52.165583 4886 scope.go:117] "RemoveContainer" containerID="fbecb6255a3f2d33607adb71963134e7eb4f057014a12ad026702a5429304db4"
Jan 29 17:15:52 crc kubenswrapper[4886]: I0129 17:15:52.218215 4886 scope.go:117] "RemoveContainer" containerID="dae301d02f31a6be0962a543705953e6d92f427e7aa9bc8443d7688a4f7705a4"
Jan 29 17:15:52 crc kubenswrapper[4886]: I0129 17:15:52.250162 4886 scope.go:117] "RemoveContainer" containerID="ef7ef7e1c633f815512fbc83adaa9bb46d23ddf73eb8c93c02d1c3c3b64a5fcf"
Jan 29 17:15:52 crc kubenswrapper[4886]: I0129 17:15:52.280828 4886 scope.go:117] "RemoveContainer" containerID="11300dda6841f3bcadbf8fc0b293c71f220072872935dad2eeec46ba483d2773"
Jan 29 17:15:52 crc kubenswrapper[4886]: I0129 17:15:52.324888 4886 scope.go:117] "RemoveContainer" containerID="d34996a936f771ac75eec769fb4795e0b3637c5867ba052c3b34c2c7b2aee667"
Jan 29 17:15:52 crc kubenswrapper[4886]: I0129 17:15:52.354909 4886 scope.go:117] "RemoveContainer" containerID="0341a2566f1bb6385e4ca19bd7599e154fd2818c69290a143a8dae194ef6f346"
Jan 29 17:15:52 crc kubenswrapper[4886]: I0129 17:15:52.563270 4886 generic.go:334] "Generic (PLEG): container finished" podID="1f773961-b526-4457-870c-ac299a3e3312" containerID="398572a57dafba1fd44cb5ba23bcfc932f80aa853a274d63caeb2dea379597de" exitCode=0
Jan 29 17:15:52 crc kubenswrapper[4886]: I0129 17:15:52.563373 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bv9pm" event={"ID":"1f773961-b526-4457-870c-ac299a3e3312","Type":"ContainerDied","Data":"398572a57dafba1fd44cb5ba23bcfc932f80aa853a274d63caeb2dea379597de"}
Jan 29 17:15:52 crc kubenswrapper[4886]: I0129 17:15:52.563422 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bv9pm" event={"ID":"1f773961-b526-4457-870c-ac299a3e3312","Type":"ContainerStarted","Data":"1cee306e378c6e1adda858d2bbd9e36da757769a33d53cb9a3ec25090fcac3dd"}
Jan 29 17:15:52 crc kubenswrapper[4886]: I0129 17:15:52.569483 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 17:15:54 crc kubenswrapper[4886]: I0129 17:15:54.594735 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bv9pm" event={"ID":"1f773961-b526-4457-870c-ac299a3e3312","Type":"ContainerStarted","Data":"b9e457fec0b46000ce1469c5ea146165937abd84d20dd0308ec4a5fc11ab5a73"}
Jan 29 17:15:56 crc kubenswrapper[4886]: I0129 17:15:56.616684 4886 scope.go:117] "RemoveContainer" containerID="37523dcabcb104a05e3a585e6aacd7a7633efd02b8c8e5f7dd95e23d0d43f05d"
Jan 29 17:15:56 crc kubenswrapper[4886]: E0129 17:15:56.617599 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:15:58 crc kubenswrapper[4886]: I0129 17:15:58.035692 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-bd38-account-create-update-rgmr5"]
Jan 29 17:15:58 crc kubenswrapper[4886]: I0129 17:15:58.046842 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-bd38-account-create-update-rgmr5"]
Jan 29 17:15:58 crc kubenswrapper[4886]: I0129 17:15:58.648321 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c31fe7aa-0ad1-44ef-a748-b4f366a4d374" path="/var/lib/kubelet/pods/c31fe7aa-0ad1-44ef-a748-b4f366a4d374/volumes"
Jan 29 17:15:59 crc kubenswrapper[4886]: I0129 17:15:59.662873 4886 generic.go:334] "Generic (PLEG): container finished" podID="1f773961-b526-4457-870c-ac299a3e3312" containerID="b9e457fec0b46000ce1469c5ea146165937abd84d20dd0308ec4a5fc11ab5a73" exitCode=0
Jan 29 17:15:59 crc kubenswrapper[4886]: I0129 17:15:59.662914 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bv9pm" event={"ID":"1f773961-b526-4457-870c-ac299a3e3312","Type":"ContainerDied","Data":"b9e457fec0b46000ce1469c5ea146165937abd84d20dd0308ec4a5fc11ab5a73"}
Jan 29 17:16:00 crc kubenswrapper[4886]: I0129 17:16:00.033177 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-4501-account-create-update-hj72z"]
Jan 29 17:16:00 crc kubenswrapper[4886]: I0129 17:16:00.045214 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-70c1-account-create-update-gwzzv"]
Jan 29 17:16:00 crc kubenswrapper[4886]: I0129 17:16:00.055569 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-4501-account-create-update-hj72z"]
Jan 29 17:16:00 crc kubenswrapper[4886]: I0129 17:16:00.065129 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-e433-account-create-update-qm5sx"]
Jan 29 17:16:00 crc kubenswrapper[4886]: I0129 17:16:00.074156 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-e433-account-create-update-qm5sx"]
Jan 29 17:16:00 crc kubenswrapper[4886]: I0129 17:16:00.085247 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-70c1-account-create-update-gwzzv"]
Jan 29 17:16:00 crc kubenswrapper[4886]: I0129 17:16:00.627131 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b3dc785-5f55-49ca-8678-5105ba7e0568" path="/var/lib/kubelet/pods/2b3dc785-5f55-49ca-8678-5105ba7e0568/volumes"
Jan 29 17:16:00 crc kubenswrapper[4886]: I0129 17:16:00.627930 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95df3f15-8d1d-4baf-bbb6-df4939f0d201" path="/var/lib/kubelet/pods/95df3f15-8d1d-4baf-bbb6-df4939f0d201/volumes"
Jan 29 17:16:00 crc kubenswrapper[4886]: I0129 17:16:00.628644 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8e697ee-193d-4ce1-9905-cebf2e6ba7ff" path="/var/lib/kubelet/pods/b8e697ee-193d-4ce1-9905-cebf2e6ba7ff/volumes"
Jan 29 17:16:01 crc kubenswrapper[4886]: I0129 17:16:01.685152 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bv9pm" event={"ID":"1f773961-b526-4457-870c-ac299a3e3312","Type":"ContainerStarted","Data":"1a0aa82fdb2a0a8ce345b81c0f3dabeccc7dfaf4d0119db5450b96bd81c1f459"}
Jan 29 17:16:01 crc kubenswrapper[4886]: I0129 17:16:01.710693 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bv9pm" podStartSLOduration=3.374734529 podStartE2EDuration="11.71067789s" podCreationTimestamp="2026-01-29 17:15:50 +0000 UTC" firstStartedPulling="2026-01-29 17:15:52.569166298 +0000 UTC m=+3235.477885570" lastFinishedPulling="2026-01-29 17:16:00.905109659 +0000 UTC m=+3243.813828931" observedRunningTime="2026-01-29 17:16:01.707107379 +0000 UTC m=+3244.615826681" watchObservedRunningTime="2026-01-29 17:16:01.71067789 +0000 UTC m=+3244.619397162"
Jan 29 17:16:03 crc kubenswrapper[4886]: I0129 17:16:03.035401 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-8whvl"]
Jan 29 17:16:03 crc kubenswrapper[4886]: I0129 17:16:03.048019 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-8whvl"]
Jan 29 17:16:04 crc kubenswrapper[4886]: I0129 17:16:04.629844 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c9729b7-e21b-4509-b337-618094fb2d52" path="/var/lib/kubelet/pods/6c9729b7-e21b-4509-b337-618094fb2d52/volumes"
Jan 29 17:16:07 crc kubenswrapper[4886]: I0129 17:16:07.616402 4886 scope.go:117] "RemoveContainer" containerID="37523dcabcb104a05e3a585e6aacd7a7633efd02b8c8e5f7dd95e23d0d43f05d"
Jan 29 17:16:07 crc kubenswrapper[4886]: E0129 17:16:07.617495 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:16:10 crc kubenswrapper[4886]: I0129 17:16:10.748004 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bv9pm"
Jan 29 17:16:10 crc kubenswrapper[4886]: I0129 17:16:10.748421 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bv9pm"
Jan 29 17:16:11 crc kubenswrapper[4886]: I0129 17:16:11.804879 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bv9pm" podUID="1f773961-b526-4457-870c-ac299a3e3312" containerName="registry-server" probeResult="failure" output=<
Jan 29 17:16:11 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s
Jan 29 17:16:11 crc kubenswrapper[4886]: >
Jan 29 17:16:14 crc kubenswrapper[4886]: I0129 17:16:14.044132 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-thqn5"]
Jan 29 17:16:14 crc kubenswrapper[4886]: I0129 17:16:14.053607 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-thqn5"]
Jan 29 17:16:14 crc kubenswrapper[4886]: I0129 17:16:14.629030 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f114908-5594-4378-939f-f54b2157d676" path="/var/lib/kubelet/pods/9f114908-5594-4378-939f-f54b2157d676/volumes"
Jan 29 17:16:20 crc kubenswrapper[4886]: I0129 17:16:20.617797 4886 scope.go:117] "RemoveContainer" containerID="37523dcabcb104a05e3a585e6aacd7a7633efd02b8c8e5f7dd95e23d0d43f05d"
Jan 29 17:16:20 crc kubenswrapper[4886]: E0129 17:16:20.618961 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:16:21 crc kubenswrapper[4886]: I0129 17:16:21.825510 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bv9pm" podUID="1f773961-b526-4457-870c-ac299a3e3312" containerName="registry-server" probeResult="failure" output=<
Jan 29 17:16:21 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s
Jan 29 17:16:21 crc kubenswrapper[4886]: >
Jan 29 17:16:30 crc kubenswrapper[4886]: I0129 17:16:30.802866 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bv9pm"
Jan 29 17:16:30 crc kubenswrapper[4886]: I0129 17:16:30.851375 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bv9pm"
Jan 29 17:16:33 crc kubenswrapper[4886]: I0129 17:16:33.615458 4886 scope.go:117] "RemoveContainer" containerID="37523dcabcb104a05e3a585e6aacd7a7633efd02b8c8e5f7dd95e23d0d43f05d"
Jan 29 17:16:33 crc kubenswrapper[4886]: E0129 17:16:33.616274 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:16:34 crc kubenswrapper[4886]: I0129 17:16:34.418013 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bv9pm"]
Jan 29 17:16:34 crc kubenswrapper[4886]: I0129 17:16:34.418274 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bv9pm" podUID="1f773961-b526-4457-870c-ac299a3e3312" containerName="registry-server" containerID="cri-o://1a0aa82fdb2a0a8ce345b81c0f3dabeccc7dfaf4d0119db5450b96bd81c1f459" gracePeriod=2
Jan 29 17:16:35 crc kubenswrapper[4886]: I0129 17:16:35.026293 4886 generic.go:334] "Generic (PLEG): container finished" podID="1f773961-b526-4457-870c-ac299a3e3312" containerID="1a0aa82fdb2a0a8ce345b81c0f3dabeccc7dfaf4d0119db5450b96bd81c1f459" exitCode=0
Jan 29 17:16:35 crc kubenswrapper[4886]: I0129 17:16:35.026405 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bv9pm" event={"ID":"1f773961-b526-4457-870c-ac299a3e3312","Type":"ContainerDied","Data":"1a0aa82fdb2a0a8ce345b81c0f3dabeccc7dfaf4d0119db5450b96bd81c1f459"}
Jan 29 17:16:35 crc kubenswrapper[4886]: I0129 17:16:35.668099 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bv9pm"
Jan 29 17:16:35 crc kubenswrapper[4886]: I0129 17:16:35.850340 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rq4p\" (UniqueName: \"kubernetes.io/projected/1f773961-b526-4457-870c-ac299a3e3312-kube-api-access-6rq4p\") pod \"1f773961-b526-4457-870c-ac299a3e3312\" (UID: \"1f773961-b526-4457-870c-ac299a3e3312\") "
Jan 29 17:16:35 crc kubenswrapper[4886]: I0129 17:16:35.850544 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f773961-b526-4457-870c-ac299a3e3312-utilities\") pod \"1f773961-b526-4457-870c-ac299a3e3312\" (UID: \"1f773961-b526-4457-870c-ac299a3e3312\") "
Jan 29 17:16:35 crc kubenswrapper[4886]: I0129 17:16:35.850569 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f773961-b526-4457-870c-ac299a3e3312-catalog-content\") pod \"1f773961-b526-4457-870c-ac299a3e3312\" (UID: \"1f773961-b526-4457-870c-ac299a3e3312\") "
Jan 29 17:16:35 crc kubenswrapper[4886]: I0129 17:16:35.851450 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f773961-b526-4457-870c-ac299a3e3312-utilities" (OuterVolumeSpecName: "utilities") pod "1f773961-b526-4457-870c-ac299a3e3312" (UID: "1f773961-b526-4457-870c-ac299a3e3312"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:16:35 crc kubenswrapper[4886]: I0129 17:16:35.857473 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f773961-b526-4457-870c-ac299a3e3312-kube-api-access-6rq4p" (OuterVolumeSpecName: "kube-api-access-6rq4p") pod "1f773961-b526-4457-870c-ac299a3e3312" (UID: "1f773961-b526-4457-870c-ac299a3e3312"). InnerVolumeSpecName "kube-api-access-6rq4p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:16:35 crc kubenswrapper[4886]: I0129 17:16:35.953446 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f773961-b526-4457-870c-ac299a3e3312-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 17:16:35 crc kubenswrapper[4886]: I0129 17:16:35.953477 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rq4p\" (UniqueName: \"kubernetes.io/projected/1f773961-b526-4457-870c-ac299a3e3312-kube-api-access-6rq4p\") on node \"crc\" DevicePath \"\""
Jan 29 17:16:35 crc kubenswrapper[4886]: I0129 17:16:35.963691 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f773961-b526-4457-870c-ac299a3e3312-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f773961-b526-4457-870c-ac299a3e3312" (UID: "1f773961-b526-4457-870c-ac299a3e3312"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:16:36 crc kubenswrapper[4886]: I0129 17:16:36.040953 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bv9pm" event={"ID":"1f773961-b526-4457-870c-ac299a3e3312","Type":"ContainerDied","Data":"1cee306e378c6e1adda858d2bbd9e36da757769a33d53cb9a3ec25090fcac3dd"}
Jan 29 17:16:36 crc kubenswrapper[4886]: I0129 17:16:36.041021 4886 scope.go:117] "RemoveContainer" containerID="1a0aa82fdb2a0a8ce345b81c0f3dabeccc7dfaf4d0119db5450b96bd81c1f459"
Jan 29 17:16:36 crc kubenswrapper[4886]: I0129 17:16:36.042503 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bv9pm"
Jan 29 17:16:36 crc kubenswrapper[4886]: I0129 17:16:36.056347 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f773961-b526-4457-870c-ac299a3e3312-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 17:16:36 crc kubenswrapper[4886]: I0129 17:16:36.090758 4886 scope.go:117] "RemoveContainer" containerID="b9e457fec0b46000ce1469c5ea146165937abd84d20dd0308ec4a5fc11ab5a73"
Jan 29 17:16:36 crc kubenswrapper[4886]: I0129 17:16:36.097390 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bv9pm"]
Jan 29 17:16:36 crc kubenswrapper[4886]: I0129 17:16:36.108440 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bv9pm"]
Jan 29 17:16:36 crc kubenswrapper[4886]: I0129 17:16:36.122575 4886 scope.go:117] "RemoveContainer" containerID="398572a57dafba1fd44cb5ba23bcfc932f80aa853a274d63caeb2dea379597de"
Jan 29 17:16:36 crc kubenswrapper[4886]: I0129 17:16:36.639718 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f773961-b526-4457-870c-ac299a3e3312" path="/var/lib/kubelet/pods/1f773961-b526-4457-870c-ac299a3e3312/volumes"
Jan 29 17:16:48 crc kubenswrapper[4886]: I0129 17:16:48.629292 4886 scope.go:117] "RemoveContainer" containerID="37523dcabcb104a05e3a585e6aacd7a7633efd02b8c8e5f7dd95e23d0d43f05d"
Jan 29 17:16:48 crc kubenswrapper[4886]: E0129 17:16:48.630155 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:16:52 crc kubenswrapper[4886]: I0129 17:16:52.819674 4886 scope.go:117] "RemoveContainer" containerID="c6fd592bb372f4bd56073a5709a8ef40ff848343cbd26b66d1e162d12eab6737"
Jan 29 17:16:52 crc kubenswrapper[4886]: I0129 17:16:52.859501 4886 scope.go:117] "RemoveContainer" containerID="5279babaff011b0a7c0724784680ba960a9fce4465f977efe275f3b290d89fab"
Jan 29 17:16:52 crc kubenswrapper[4886]: I0129 17:16:52.908817 4886 scope.go:117] "RemoveContainer" containerID="297512a17905e8884ba2dee2e1bd0e97f5fbde7e67ab2e041189401e3a8b1069"
Jan 29 17:16:52 crc kubenswrapper[4886]: I0129 17:16:52.930582 4886 scope.go:117] "RemoveContainer" containerID="76e9fd9551f88713599d793f819bec47fc38185510d47fbd152e0939943ac037"
Jan 29 17:16:52 crc kubenswrapper[4886]: I0129 17:16:52.994269 4886 scope.go:117] "RemoveContainer" containerID="1b2a63dcfed7450a36197cbdc154c29e365ef6be50e63a79bd321d9e35afd21f"
Jan 29 17:16:53 crc kubenswrapper[4886]: I0129 17:16:53.032677 4886 scope.go:117] "RemoveContainer" containerID="c0779e333572b6cd2f4e3dc26dcb63d1cb95b806d59884314b143132c6990518"
Jan 29 17:16:53 crc kubenswrapper[4886]: I0129 17:16:53.095959 4886 scope.go:117] "RemoveContainer" containerID="05a52ecdbf485c6c724d9a992c69aca83958ea1704df0dac8409ddf6fbc7b4d1"
Jan 29 17:16:53 crc kubenswrapper[4886]: I0129 17:16:53.147720 4886 scope.go:117] "RemoveContainer" containerID="e61c63ed7fdb0d740a758c779dfae1d17126672ffa65adff6cc5cd29f6bcc51c"
Jan 29 17:17:03 crc kubenswrapper[4886]: I0129 17:17:03.616764 4886 scope.go:117] "RemoveContainer" containerID="37523dcabcb104a05e3a585e6aacd7a7633efd02b8c8e5f7dd95e23d0d43f05d"
Jan 29 17:17:03 crc kubenswrapper[4886]: E0129 17:17:03.617316 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:17:15 crc kubenswrapper[4886]: I0129 17:17:15.056220 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-p924n"]
Jan 29 17:17:15 crc kubenswrapper[4886]: I0129 17:17:15.071053 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-p924n"]
Jan 29 17:17:16 crc kubenswrapper[4886]: I0129 17:17:16.615438 4886 scope.go:117] "RemoveContainer" containerID="37523dcabcb104a05e3a585e6aacd7a7633efd02b8c8e5f7dd95e23d0d43f05d"
Jan 29 17:17:16 crc kubenswrapper[4886]: E0129 17:17:16.616781 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:17:16 crc kubenswrapper[4886]: I0129 17:17:16.646490 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68cdc6ed-ce63-43af-8502-b36cc0ae788a" path="/var/lib/kubelet/pods/68cdc6ed-ce63-43af-8502-b36cc0ae788a/volumes"
Jan 29 17:17:18 crc kubenswrapper[4886]: I0129 17:17:18.057496 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-8m2mm"]
Jan 29 17:17:18 crc kubenswrapper[4886]: I0129 17:17:18.074083 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-8m2mm"]
Jan 29 17:17:18 crc kubenswrapper[4886]: I0129 17:17:18.629613 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8923ac96-087a-425b-a8b4-c09aa4be3d78" path="/var/lib/kubelet/pods/8923ac96-087a-425b-a8b4-c09aa4be3d78/volumes"
Jan 29 17:17:30 crc kubenswrapper[4886]: I0129 17:17:30.617062 4886 scope.go:117] "RemoveContainer" containerID="37523dcabcb104a05e3a585e6aacd7a7633efd02b8c8e5f7dd95e23d0d43f05d"
Jan 29 17:17:31 crc kubenswrapper[4886]: I0129 17:17:31.724381 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerStarted","Data":"bd2f023886beead4933eaa92185559b0b9421864121dccb5c51a6c3ddd9cce35"}
Jan 29 17:17:34 crc kubenswrapper[4886]: I0129 17:17:34.055483 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-q2dxw"]
Jan 29 17:17:34 crc kubenswrapper[4886]: I0129 17:17:34.068597 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-q2dxw"]
Jan 29 17:17:34 crc kubenswrapper[4886]: I0129 17:17:34.631855 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffb099fb-7bdb-4969-b3cb-6fc4ef498afd" path="/var/lib/kubelet/pods/ffb099fb-7bdb-4969-b3cb-6fc4ef498afd/volumes"
Jan 29 17:17:38 crc kubenswrapper[4886]: I0129 17:17:38.058600 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-6nmwn"]
Jan 29 17:17:38 crc kubenswrapper[4886]: I0129 17:17:38.070496 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-6nmwn"]
Jan 29 17:17:38 crc kubenswrapper[4886]: I0129 17:17:38.627187 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0058f32-ae80-4dde-9dce-095c62f45979" path="/var/lib/kubelet/pods/a0058f32-ae80-4dde-9dce-095c62f45979/volumes"
Jan 29 17:17:44 crc kubenswrapper[4886]: I0129 17:17:44.064787 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-j5gfz"]
Jan 29 17:17:44 crc kubenswrapper[4886]: I0129 17:17:44.076522 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-j5gfz"]
Jan 29 17:17:44 crc kubenswrapper[4886]: I0129 17:17:44.630418 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04dae116-ceca-4588-9cba-1266bfa92caf" path="/var/lib/kubelet/pods/04dae116-ceca-4588-9cba-1266bfa92caf/volumes"
Jan 29 17:17:53 crc kubenswrapper[4886]: I0129 17:17:53.032997 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-qglhp"]
Jan 29 17:17:53 crc kubenswrapper[4886]: I0129 17:17:53.041601 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-qglhp"]
Jan 29 17:17:53 crc kubenswrapper[4886]: I0129 17:17:53.476040 4886 scope.go:117] "RemoveContainer" containerID="ab83d2d0c36aaea48832e86668e20e1d6f6f876644014c27f52bee83b6960b7d"
Jan 29 17:17:53 crc kubenswrapper[4886]: I0129 17:17:53.508729 4886 scope.go:117] "RemoveContainer" containerID="b56f617415d312996740dc4a8697ef643e749e77f4339179492aab6c12f2f0d4"
Jan 29 17:17:53 crc kubenswrapper[4886]: I0129 17:17:53.569593 4886 scope.go:117] "RemoveContainer" containerID="6375ad3e949f813db64562de4e61fa2910abcb717d2e211c509e5dbcb6b07f3a"
Jan 29 17:17:53 crc kubenswrapper[4886]: I0129 17:17:53.641159 4886 scope.go:117] "RemoveContainer" containerID="09a30c5dfcb3deacf09e3ccec1c515a8213db072a4cbe06ac44ba60b9a7d0159"
Jan 29 17:17:53 crc kubenswrapper[4886]: I0129 17:17:53.691040 4886 scope.go:117] "RemoveContainer" containerID="462d0b69d42ff5bdae3194985f827b482bb0c2607dbc772e35d27e51d1171c94"
Jan 29 17:17:54 crc kubenswrapper[4886]: I0129 17:17:54.627447 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43da0665-7e6a-4176-ae84-71128a89a243" path="/var/lib/kubelet/pods/43da0665-7e6a-4176-ae84-71128a89a243/volumes"
Jan 29 17:18:15 crc kubenswrapper[4886]: I0129 17:18:15.187020 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bfhgd"]
Jan 29 17:18:15 crc kubenswrapper[4886]: E0129 17:18:15.188138 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f773961-b526-4457-870c-ac299a3e3312" containerName="extract-content"
Jan 29 17:18:15 crc kubenswrapper[4886]: I0129 17:18:15.188152 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f773961-b526-4457-870c-ac299a3e3312" containerName="extract-content"
Jan 29 17:18:15 crc kubenswrapper[4886]: E0129 17:18:15.188186 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f773961-b526-4457-870c-ac299a3e3312" containerName="registry-server"
Jan 29 17:18:15 crc kubenswrapper[4886]: I0129 17:18:15.188194 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f773961-b526-4457-870c-ac299a3e3312" containerName="registry-server"
Jan 29 17:18:15 crc kubenswrapper[4886]: E0129 17:18:15.188207 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f773961-b526-4457-870c-ac299a3e3312" containerName="extract-utilities"
Jan 29 17:18:15 crc kubenswrapper[4886]: I0129 17:18:15.188215 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f773961-b526-4457-870c-ac299a3e3312" containerName="extract-utilities"
Jan 29 17:18:15 crc kubenswrapper[4886]: I0129 17:18:15.188469 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f773961-b526-4457-870c-ac299a3e3312" containerName="registry-server"
Jan 29 17:18:15 crc kubenswrapper[4886]: I0129 17:18:15.190198 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bfhgd"
Jan 29 17:18:15 crc kubenswrapper[4886]: I0129 17:18:15.198057 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bfhgd"]
Jan 29 17:18:15 crc kubenswrapper[4886]: I0129 17:18:15.338564 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddpvb\" (UniqueName: \"kubernetes.io/projected/3de4fb0c-479a-43eb-bf0e-910c8993247d-kube-api-access-ddpvb\") pod \"certified-operators-bfhgd\" (UID: \"3de4fb0c-479a-43eb-bf0e-910c8993247d\") " pod="openshift-marketplace/certified-operators-bfhgd"
Jan 29 17:18:15 crc kubenswrapper[4886]: I0129 17:18:15.338730 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3de4fb0c-479a-43eb-bf0e-910c8993247d-utilities\") pod \"certified-operators-bfhgd\" (UID: \"3de4fb0c-479a-43eb-bf0e-910c8993247d\") " pod="openshift-marketplace/certified-operators-bfhgd"
Jan 29 17:18:15 crc kubenswrapper[4886]: I0129 17:18:15.338777 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3de4fb0c-479a-43eb-bf0e-910c8993247d-catalog-content\") pod \"certified-operators-bfhgd\" (UID: \"3de4fb0c-479a-43eb-bf0e-910c8993247d\") " pod="openshift-marketplace/certified-operators-bfhgd"
Jan 29 17:18:15 crc kubenswrapper[4886]: I0129 17:18:15.440710 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3de4fb0c-479a-43eb-bf0e-910c8993247d-utilities\") pod \"certified-operators-bfhgd\" (UID: \"3de4fb0c-479a-43eb-bf0e-910c8993247d\") " pod="openshift-marketplace/certified-operators-bfhgd"
Jan 29 17:18:15 crc kubenswrapper[4886]: I0129 17:18:15.440793 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3de4fb0c-479a-43eb-bf0e-910c8993247d-catalog-content\") pod \"certified-operators-bfhgd\" (UID: \"3de4fb0c-479a-43eb-bf0e-910c8993247d\") " pod="openshift-marketplace/certified-operators-bfhgd"
Jan 29 17:18:15 crc kubenswrapper[4886]: I0129 17:18:15.440925 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddpvb\" (UniqueName: \"kubernetes.io/projected/3de4fb0c-479a-43eb-bf0e-910c8993247d-kube-api-access-ddpvb\") pod \"certified-operators-bfhgd\" (UID: \"3de4fb0c-479a-43eb-bf0e-910c8993247d\") " pod="openshift-marketplace/certified-operators-bfhgd"
Jan 29 17:18:15 crc kubenswrapper[4886]: I0129 17:18:15.441398 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3de4fb0c-479a-43eb-bf0e-910c8993247d-utilities\") pod \"certified-operators-bfhgd\" (UID: \"3de4fb0c-479a-43eb-bf0e-910c8993247d\") " pod="openshift-marketplace/certified-operators-bfhgd"
Jan 29 17:18:15 crc kubenswrapper[4886]: I0129 17:18:15.441645 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3de4fb0c-479a-43eb-bf0e-910c8993247d-catalog-content\") pod \"certified-operators-bfhgd\" (UID: \"3de4fb0c-479a-43eb-bf0e-910c8993247d\") " pod="openshift-marketplace/certified-operators-bfhgd"
Jan 29 17:18:15 crc kubenswrapper[4886]: I0129 17:18:15.465748 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddpvb\" (UniqueName: \"kubernetes.io/projected/3de4fb0c-479a-43eb-bf0e-910c8993247d-kube-api-access-ddpvb\") pod \"certified-operators-bfhgd\" (UID: \"3de4fb0c-479a-43eb-bf0e-910c8993247d\") " pod="openshift-marketplace/certified-operators-bfhgd"
Jan 29 17:18:15 crc kubenswrapper[4886]: I0129 17:18:15.519742 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bfhgd"
Jan 29 17:18:16 crc kubenswrapper[4886]: I0129 17:18:16.082499 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bfhgd"]
Jan 29 17:18:16 crc kubenswrapper[4886]: I0129 17:18:16.240243 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bfhgd" event={"ID":"3de4fb0c-479a-43eb-bf0e-910c8993247d","Type":"ContainerStarted","Data":"993d036c77151116f0104b5e52ac5de851cc76e020d62a74f76c3bbf77ef5ab3"}
Jan 29 17:18:17 crc kubenswrapper[4886]: I0129 17:18:17.253784 4886 generic.go:334] "Generic (PLEG): container finished" podID="3de4fb0c-479a-43eb-bf0e-910c8993247d" containerID="7b334ee63888db455be0d61b260b626dbcfd228221eee73d28ea7fa18d022523" exitCode=0
Jan 29 17:18:17 crc kubenswrapper[4886]: I0129 17:18:17.253863 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bfhgd" event={"ID":"3de4fb0c-479a-43eb-bf0e-910c8993247d","Type":"ContainerDied","Data":"7b334ee63888db455be0d61b260b626dbcfd228221eee73d28ea7fa18d022523"}
Jan 29 17:18:17 crc kubenswrapper[4886]: E0129 17:18:17.672594 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 29 17:18:17 crc kubenswrapper[4886]: E0129 17:18:17.673016 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ddpvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-bfhgd_openshift-marketplace(3de4fb0c-479a-43eb-bf0e-910c8993247d): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 17:18:17 crc kubenswrapper[4886]: E0129 17:18:17.676468 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-bfhgd" podUID="3de4fb0c-479a-43eb-bf0e-910c8993247d"
Jan 29 17:18:18 crc kubenswrapper[4886]: E0129 17:18:18.266964 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bfhgd" podUID="3de4fb0c-479a-43eb-bf0e-910c8993247d"
Jan 29 17:18:24 crc kubenswrapper[4886]: I0129 17:18:24.045494 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-n9fr6"]
Jan 29 17:18:24 crc kubenswrapper[4886]: I0129 17:18:24.056730 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-n9fr6"]
Jan 29 17:18:24 crc kubenswrapper[4886]: I0129 17:18:24.633356 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea6c4698-f001-402f-91e3-1e80bc7bf443" path="/var/lib/kubelet/pods/ea6c4698-f001-402f-91e3-1e80bc7bf443/volumes"
Jan 29 17:18:25 crc kubenswrapper[4886]: I0129 17:18:25.031711 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-6jmdx"]
Jan 29 17:18:25 crc kubenswrapper[4886]: I0129 17:18:25.043054 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-4e9f-account-create-update-sdhth"]
Jan 29 17:18:25 crc kubenswrapper[4886]: I0129 17:18:25.055649 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-6jmdx"]
Jan 29 17:18:25 crc kubenswrapper[4886]: I0129 17:18:25.066402 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-4e9f-account-create-update-sdhth"]
Jan 29 17:18:26 crc kubenswrapper[4886]: I0129 17:18:26.040043 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-f9c8-account-create-update-hcc42"]
Jan 29 17:18:26 crc kubenswrapper[4886]: I0129 17:18:26.050475 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-vqrmb"]
Jan 29 17:18:26 crc kubenswrapper[4886]: I0129 17:18:26.060316 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-f9c8-account-create-update-hcc42"]
Jan 29 17:18:26 crc kubenswrapper[4886]: I0129 17:18:26.069624 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-vqrmb"]
Jan 29 17:18:26 crc kubenswrapper[4886]: I0129 17:18:26.631473 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0abefc39-4eb0-4600-8e11-b5d4af3c11b4" path="/var/lib/kubelet/pods/0abefc39-4eb0-4600-8e11-b5d4af3c11b4/volumes"
Jan 29 17:18:26 crc kubenswrapper[4886]: I0129 17:18:26.632934 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8258df8a-fd9a-4546-8ea7-ce4b7f7180bb" path="/var/lib/kubelet/pods/8258df8a-fd9a-4546-8ea7-ce4b7f7180bb/volumes"
Jan 29 17:18:26 crc kubenswrapper[4886]: I0129 17:18:26.634188 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0772ac7-3374-4607-a644-f4ac2e1c078a" path="/var/lib/kubelet/pods/d0772ac7-3374-4607-a644-f4ac2e1c078a/volumes"
Jan 29 17:18:26 crc kubenswrapper[4886]: I0129 17:18:26.635479 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d13e59b2-0b15-4b7f-b158-ea16ec2b5416" path="/var/lib/kubelet/pods/d13e59b2-0b15-4b7f-b158-ea16ec2b5416/volumes"
Jan 29 17:18:27 crc kubenswrapper[4886]: I0129 17:18:27.039653 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cc0e-account-create-update-nxk7k"]
Jan 29 17:18:27 crc kubenswrapper[4886]: I0129 17:18:27.053471 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cc0e-account-create-update-nxk7k"]
Jan 29 17:18:28 crc kubenswrapper[4886]: I0129 17:18:28.628464 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6af00928-6484-4071-b739-bc211ac220ef" path="/var/lib/kubelet/pods/6af00928-6484-4071-b739-bc211ac220ef/volumes"
Jan 29 17:18:31 crc kubenswrapper[4886]: E0129 17:18:31.783277 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 29 17:18:31 crc kubenswrapper[4886]: E0129 17:18:31.783958 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ddpvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-bfhgd_openshift-marketplace(3de4fb0c-479a-43eb-bf0e-910c8993247d): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 17:18:31 crc kubenswrapper[4886]: E0129 17:18:31.785198 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-bfhgd" podUID="3de4fb0c-479a-43eb-bf0e-910c8993247d"
Jan 29 17:18:43 crc kubenswrapper[4886]: E0129 17:18:43.643656 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bfhgd" podUID="3de4fb0c-479a-43eb-bf0e-910c8993247d"
Jan 29 17:18:53 crc kubenswrapper[4886]: I0129 17:18:53.847113 4886 scope.go:117] "RemoveContainer" containerID="55979afc492dd3730aa23e20e090c57835e6091af47e18bbcd87fee5afa8dde9"
Jan 29 17:18:53 crc kubenswrapper[4886]: I0129 17:18:53.874058 4886 scope.go:117] "RemoveContainer" containerID="c4ce1f7996acaa4140e3f499ede2bc0c80a3f2eb7c1df999e0b4f5903e1d75cf"
Jan 29 17:18:53 crc kubenswrapper[4886]: I0129 17:18:53.983639 4886 scope.go:117] "RemoveContainer" containerID="b398660f408eb077ec37e46aac34f95a01068c141577a940f5d64dfc4dc0b027"
Jan 29 17:18:54 crc kubenswrapper[4886]: I0129 17:18:54.026043 4886 scope.go:117] "RemoveContainer" containerID="e03fdcc391c686ad6f7c447bf2012b345cc1a12adaddfc3b0b7fbabe7adbed61"
Jan 29 17:18:54 crc kubenswrapper[4886]: I0129 17:18:54.078997 4886 scope.go:117] "RemoveContainer" containerID="e75acdd55522e91761ce2d771dbc17900e4f53d297811cf9623f07bc70ba7052"
Jan 29 17:18:54 crc kubenswrapper[4886]: I0129 17:18:54.126780 4886 scope.go:117] "RemoveContainer" containerID="8cff761f0cac80358e499809ffa647d36a191c7af1a493dc00f71f33ae4223f1"
Jan 29 17:18:54 crc kubenswrapper[4886]: I0129 17:18:54.172713 4886 scope.go:117] "RemoveContainer" containerID="92b4d1b2f475024d893ea29a83366ecc7f80ef2e9282821adbce174622472058"
Jan 29 17:18:58 crc kubenswrapper[4886]: E0129 17:18:58.867103 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 29 17:18:58 crc kubenswrapper[4886]: E0129 17:18:58.867751 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ddpvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-bfhgd_openshift-marketplace(3de4fb0c-479a-43eb-bf0e-910c8993247d): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 17:18:58 crc kubenswrapper[4886]: E0129 17:18:58.869754 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-bfhgd" podUID="3de4fb0c-479a-43eb-bf0e-910c8993247d"
Jan 29 17:19:01 crc kubenswrapper[4886]: I0129 17:19:01.057499 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-c4q4z"]
Jan 29 17:19:01 crc kubenswrapper[4886]: I0129 17:19:01.071611 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-c4q4z"]
Jan 29 17:19:02 crc kubenswrapper[4886]: I0129 17:19:02.629374 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c467eb7e-a553-4fc5-b366-607a30fe18dd" path="/var/lib/kubelet/pods/c467eb7e-a553-4fc5-b366-607a30fe18dd/volumes"
path="/var/lib/kubelet/pods/c467eb7e-a553-4fc5-b366-607a30fe18dd/volumes" Jan 29 17:19:11 crc kubenswrapper[4886]: E0129 17:19:11.617765 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bfhgd" podUID="3de4fb0c-479a-43eb-bf0e-910c8993247d" Jan 29 17:19:25 crc kubenswrapper[4886]: I0129 17:19:25.056006 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-60d5-account-create-update-w67hv"] Jan 29 17:19:25 crc kubenswrapper[4886]: I0129 17:19:25.081354 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-60d5-account-create-update-w67hv"] Jan 29 17:19:26 crc kubenswrapper[4886]: I0129 17:19:26.049206 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-6zh6p"] Jan 29 17:19:26 crc kubenswrapper[4886]: I0129 17:19:26.061470 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-6zh6p"] Jan 29 17:19:26 crc kubenswrapper[4886]: E0129 17:19:26.619458 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bfhgd" podUID="3de4fb0c-479a-43eb-bf0e-910c8993247d" Jan 29 17:19:26 crc kubenswrapper[4886]: I0129 17:19:26.689638 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="323a490d-33e2-4411-8a77-c578f409ba28" path="/var/lib/kubelet/pods/323a490d-33e2-4411-8a77-c578f409ba28/volumes" Jan 29 17:19:26 crc kubenswrapper[4886]: I0129 17:19:26.690907 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec6f2462-b78d-4619-9704-5cc67ae60974" path="/var/lib/kubelet/pods/ec6f2462-b78d-4619-9704-5cc67ae60974/volumes" Jan 29 17:19:28 crc kubenswrapper[4886]: I0129 17:19:28.030432 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-tqcf4"] Jan 29 17:19:28 crc kubenswrapper[4886]: I0129 17:19:28.046068 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-tqcf4"] Jan 29 17:19:28 crc kubenswrapper[4886]: I0129 17:19:28.633110 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cabf586-398a-45a9-80d6-2fd63d9e14e5" path="/var/lib/kubelet/pods/8cabf586-398a-45a9-80d6-2fd63d9e14e5/volumes" Jan 29 17:19:30 crc kubenswrapper[4886]: I0129 17:19:30.037520 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fznz7"] Jan 29 17:19:30 crc kubenswrapper[4886]: I0129 17:19:30.049680 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fznz7"] Jan 29 17:19:30 crc kubenswrapper[4886]: I0129 17:19:30.633286 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a88a08b7-d54a-4414-b7f6-b490949d6b70" path="/var/lib/kubelet/pods/a88a08b7-d54a-4414-b7f6-b490949d6b70/volumes" Jan 29 17:19:39 crc kubenswrapper[4886]: E0129 17:19:39.750751 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" 
image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 17:19:39 crc kubenswrapper[4886]: E0129 17:19:39.751420 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ddpvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-bfhgd_openshift-marketplace(3de4fb0c-479a-43eb-bf0e-910c8993247d): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:19:39 crc kubenswrapper[4886]: E0129 17:19:39.753043 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-bfhgd" podUID="3de4fb0c-479a-43eb-bf0e-910c8993247d" Jan 29 17:19:53 crc kubenswrapper[4886]: E0129 17:19:53.618503 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bfhgd" podUID="3de4fb0c-479a-43eb-bf0e-910c8993247d" Jan 29 17:19:54 crc kubenswrapper[4886]: I0129 17:19:54.345308 4886 scope.go:117] "RemoveContainer" containerID="b0c7be4a8a6f220b0bc62ecd7ce7d07cb8b17e5644962c70a9a466af1717c6ce" Jan 29 17:19:54 crc kubenswrapper[4886]: I0129 17:19:54.376768 4886 scope.go:117] "RemoveContainer" containerID="94c431dc7f3dd6c3f091efc6b5f4191b950083388e1ef0390fd70fcd7a85128c" Jan 29 17:19:54 crc kubenswrapper[4886]: I0129 17:19:54.443480 4886 scope.go:117] "RemoveContainer" containerID="2e1c0eadae73024c2cb0f70a58a6f4f7d1a81518c1e179c7358b1ee70d254152" Jan 29 17:19:54 crc kubenswrapper[4886]: I0129 17:19:54.501727 4886 scope.go:117] "RemoveContainer" 
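
Between pull attempts the pod workers keep logging ImagePullBackOff for the same image, so a single broken catalog image accounts for dozens of near-identical entries in this capture. A throwaway Go filter that collapses that noise into a per-image count, assuming the journal text is piped in on stdin; the regex matches the escaped quoting exactly as it appears in these lines:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Summarize ImagePullBackOff noise: count "Back-off pulling image" events
// per image so the repeating entries collapse into one line each.
func main() {
	re := regexp.MustCompile(`Back-off pulling image \\+"([^"\\]+)`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal entries can be long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for img, n := range counts {
		fmt.Printf("%6d  %s\n", n, img)
	}
}
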
containerID="b316bbc4bed9ea6d21a1f48ac1daf91a604e958e8664a1c95a0d70b2476abcfa" Jan 29 17:19:54 crc kubenswrapper[4886]: I0129 17:19:54.583672 4886 scope.go:117] "RemoveContainer" containerID="d6960d602147a760f370e0aaeba322f8c53999b050075e5ef6c33ecafc0b7928" Jan 29 17:19:59 crc kubenswrapper[4886]: I0129 17:19:59.660808 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:19:59 crc kubenswrapper[4886]: I0129 17:19:59.661818 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:20:04 crc kubenswrapper[4886]: E0129 17:20:04.620272 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bfhgd" podUID="3de4fb0c-479a-43eb-bf0e-910c8993247d" Jan 29 17:20:12 crc kubenswrapper[4886]: I0129 17:20:12.048085 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-ddfqz"] Jan 29 17:20:12 crc kubenswrapper[4886]: I0129 17:20:12.060117 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-ddfqz"] Jan 29 17:20:12 crc kubenswrapper[4886]: I0129 17:20:12.629647 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a1c51cd-f91d-406b-815c-00879a9d6401" path="/var/lib/kubelet/pods/7a1c51cd-f91d-406b-815c-00879a9d6401/volumes" Jan 29 17:20:14 crc kubenswrapper[4886]: I0129 17:20:14.674473 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vlgkv"] Jan 29 17:20:14 crc kubenswrapper[4886]: I0129 17:20:14.677196 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vlgkv" Jan 29 17:20:14 crc kubenswrapper[4886]: I0129 17:20:14.691961 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vlgkv"] Jan 29 17:20:14 crc kubenswrapper[4886]: I0129 17:20:14.748150 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6n2h\" (UniqueName: \"kubernetes.io/projected/75397189-e390-4b5d-bb9d-3017be63794e-kube-api-access-g6n2h\") pod \"community-operators-vlgkv\" (UID: \"75397189-e390-4b5d-bb9d-3017be63794e\") " pod="openshift-marketplace/community-operators-vlgkv" Jan 29 17:20:14 crc kubenswrapper[4886]: I0129 17:20:14.748480 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75397189-e390-4b5d-bb9d-3017be63794e-catalog-content\") pod \"community-operators-vlgkv\" (UID: \"75397189-e390-4b5d-bb9d-3017be63794e\") " pod="openshift-marketplace/community-operators-vlgkv" Jan 29 17:20:14 crc kubenswrapper[4886]: I0129 17:20:14.749349 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75397189-e390-4b5d-bb9d-3017be63794e-utilities\") pod \"community-operators-vlgkv\" (UID: \"75397189-e390-4b5d-bb9d-3017be63794e\") " pod="openshift-marketplace/community-operators-vlgkv" Jan 29 17:20:14 crc kubenswrapper[4886]: I0129 17:20:14.852941 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75397189-e390-4b5d-bb9d-3017be63794e-catalog-content\") pod \"community-operators-vlgkv\" (UID: \"75397189-e390-4b5d-bb9d-3017be63794e\") " pod="openshift-marketplace/community-operators-vlgkv" Jan 29 17:20:14 crc kubenswrapper[4886]: I0129 17:20:14.853261 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75397189-e390-4b5d-bb9d-3017be63794e-utilities\") pod \"community-operators-vlgkv\" (UID: \"75397189-e390-4b5d-bb9d-3017be63794e\") " pod="openshift-marketplace/community-operators-vlgkv" Jan 29 17:20:14 crc kubenswrapper[4886]: I0129 17:20:14.853381 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6n2h\" (UniqueName: \"kubernetes.io/projected/75397189-e390-4b5d-bb9d-3017be63794e-kube-api-access-g6n2h\") pod \"community-operators-vlgkv\" (UID: \"75397189-e390-4b5d-bb9d-3017be63794e\") " pod="openshift-marketplace/community-operators-vlgkv" Jan 29 17:20:14 crc kubenswrapper[4886]: I0129 17:20:14.853872 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75397189-e390-4b5d-bb9d-3017be63794e-catalog-content\") pod \"community-operators-vlgkv\" (UID: \"75397189-e390-4b5d-bb9d-3017be63794e\") " pod="openshift-marketplace/community-operators-vlgkv" Jan 29 17:20:14 crc kubenswrapper[4886]: I0129 17:20:14.853985 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75397189-e390-4b5d-bb9d-3017be63794e-utilities\") pod \"community-operators-vlgkv\" (UID: \"75397189-e390-4b5d-bb9d-3017be63794e\") " pod="openshift-marketplace/community-operators-vlgkv" Jan 29 17:20:14 crc kubenswrapper[4886]: I0129 17:20:14.874119 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-g6n2h\" (UniqueName: \"kubernetes.io/projected/75397189-e390-4b5d-bb9d-3017be63794e-kube-api-access-g6n2h\") pod \"community-operators-vlgkv\" (UID: \"75397189-e390-4b5d-bb9d-3017be63794e\") " pod="openshift-marketplace/community-operators-vlgkv" Jan 29 17:20:15 crc kubenswrapper[4886]: I0129 17:20:15.004706 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vlgkv" Jan 29 17:20:15 crc kubenswrapper[4886]: I0129 17:20:15.592745 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vlgkv"] Jan 29 17:20:15 crc kubenswrapper[4886]: E0129 17:20:15.625154 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bfhgd" podUID="3de4fb0c-479a-43eb-bf0e-910c8993247d" Jan 29 17:20:16 crc kubenswrapper[4886]: I0129 17:20:16.556879 4886 generic.go:334] "Generic (PLEG): container finished" podID="75397189-e390-4b5d-bb9d-3017be63794e" containerID="50e7d409e21eaec1e565da5ff686d38148bb5fcc53234f8118461f6f78ce385c" exitCode=0 Jan 29 17:20:16 crc kubenswrapper[4886]: I0129 17:20:16.556997 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlgkv" event={"ID":"75397189-e390-4b5d-bb9d-3017be63794e","Type":"ContainerDied","Data":"50e7d409e21eaec1e565da5ff686d38148bb5fcc53234f8118461f6f78ce385c"} Jan 29 17:20:16 crc kubenswrapper[4886]: I0129 17:20:16.557248 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlgkv" event={"ID":"75397189-e390-4b5d-bb9d-3017be63794e","Type":"ContainerStarted","Data":"3014185eacc0527fb4588d33782092cb9980b118f2b2053ba0af25fe3485682c"} Jan 29 17:20:16 crc kubenswrapper[4886]: E0129 17:20:16.695075 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 17:20:16 crc kubenswrapper[4886]: E0129 17:20:16.695509 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g6n2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vlgkv_openshift-marketplace(75397189-e390-4b5d-bb9d-3017be63794e): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:20:16 crc kubenswrapper[4886]: E0129 17:20:16.696754 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-vlgkv" podUID="75397189-e390-4b5d-bb9d-3017be63794e" Jan 29 17:20:17 crc kubenswrapper[4886]: E0129 17:20:17.572408 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vlgkv" podUID="75397189-e390-4b5d-bb9d-3017be63794e" Jan 29 17:20:27 crc kubenswrapper[4886]: E0129 17:20:27.617832 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bfhgd" podUID="3de4fb0c-479a-43eb-bf0e-910c8993247d" Jan 29 17:20:29 crc kubenswrapper[4886]: I0129 17:20:29.661071 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:20:29 crc kubenswrapper[4886]: I0129 17:20:29.661594 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:20:30 crc kubenswrapper[4886]: E0129 17:20:30.768243 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 17:20:30 crc kubenswrapper[4886]: E0129 17:20:30.768701 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g6n2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vlgkv_openshift-marketplace(75397189-e390-4b5d-bb9d-3017be63794e): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:20:30 crc kubenswrapper[4886]: E0129 17:20:30.769881 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-vlgkv" podUID="75397189-e390-4b5d-bb9d-3017be63794e" Jan 29 17:20:41 crc kubenswrapper[4886]: E0129 17:20:41.618127 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bfhgd" podUID="3de4fb0c-479a-43eb-bf0e-910c8993247d" Jan 29 17:20:45 crc kubenswrapper[4886]: E0129 17:20:45.618487 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vlgkv" podUID="75397189-e390-4b5d-bb9d-3017be63794e" Jan 29 17:20:52 crc kubenswrapper[4886]: E0129 17:20:52.618387 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bfhgd" podUID="3de4fb0c-479a-43eb-bf0e-910c8993247d" Jan 29 17:20:54 crc kubenswrapper[4886]: I0129 17:20:54.738729 4886 scope.go:117] "RemoveContainer" containerID="5be86521758fe7c03f20fd8b758e10774f421701b95693128fa47b2a2e5adc70" Jan 29 17:20:56 crc kubenswrapper[4886]: I0129 17:20:56.617300 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 17:20:56 crc kubenswrapper[4886]: E0129 17:20:56.746594 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 17:20:56 crc kubenswrapper[4886]: E0129 17:20:56.747705 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g6n2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vlgkv_openshift-marketplace(75397189-e390-4b5d-bb9d-3017be63794e): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:20:56 crc kubenswrapper[4886]: E0129 17:20:56.749298 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: 
Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-vlgkv" podUID="75397189-e390-4b5d-bb9d-3017be63794e" Jan 29 17:20:59 crc kubenswrapper[4886]: I0129 17:20:59.660660 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:20:59 crc kubenswrapper[4886]: I0129 17:20:59.661197 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:20:59 crc kubenswrapper[4886]: I0129 17:20:59.661241 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" Jan 29 17:20:59 crc kubenswrapper[4886]: I0129 17:20:59.662162 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bd2f023886beead4933eaa92185559b0b9421864121dccb5c51a6c3ddd9cce35"} pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 17:20:59 crc kubenswrapper[4886]: I0129 17:20:59.662223 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" containerID="cri-o://bd2f023886beead4933eaa92185559b0b9421864121dccb5c51a6c3ddd9cce35" gracePeriod=600 Jan 29 17:21:00 crc kubenswrapper[4886]: I0129 17:21:00.053436 4886 generic.go:334] "Generic (PLEG): container finished" podID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerID="bd2f023886beead4933eaa92185559b0b9421864121dccb5c51a6c3ddd9cce35" exitCode=0 Jan 29 17:21:00 crc kubenswrapper[4886]: I0129 17:21:00.053512 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerDied","Data":"bd2f023886beead4933eaa92185559b0b9421864121dccb5c51a6c3ddd9cce35"} Jan 29 17:21:00 crc kubenswrapper[4886]: I0129 17:21:00.053861 4886 scope.go:117] "RemoveContainer" containerID="37523dcabcb104a05e3a585e6aacd7a7633efd02b8c8e5f7dd95e23d0d43f05d" Jan 29 17:21:01 crc kubenswrapper[4886]: I0129 17:21:01.069186 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerStarted","Data":"55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b"} Jan 29 17:21:06 crc kubenswrapper[4886]: I0129 17:21:06.123943 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bfhgd" event={"ID":"3de4fb0c-479a-43eb-bf0e-910c8993247d","Type":"ContainerStarted","Data":"23137d1dfd07eb4543914832d6fbec9b81563f5df4b7e520d96f009aed078d17"} Jan 29 17:21:07 crc kubenswrapper[4886]: I0129 17:21:07.142734 4886 generic.go:334] "Generic (PLEG): container finished" 
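
The machine-config-daemon sequence above is the complete liveness cycle: prober.go records connection refused against http://127.0.0.1:8798/health, kubelet marks the container unhealthy, kills it with the pod's 600-second grace period, PLEG reports ContainerDied, and a replacement container ID appears about a second later. A sketch of the check itself, assuming a plain GET where a refused connection or a 4xx/5xx status counts as failure; the timeout and iteration count are illustrative, and only the URL and the 30-second cadence come from the log:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce performs one HTTP liveness check in the spirit of the entries
// above: a GET against the container's health endpoint, where a refused
// connection or an error status counts as a probe failure.
func probeOnce(url string) error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("probe failed: %w", err) // e.g. connect: connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("probe failed with status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Endpoint from the log; the 30s period matches the observed cadence
	// (17:19:59, 17:20:29, 17:20:59, ...).
	for i := 0; i < 3; i++ {
		if err := probeOnce("http://127.0.0.1:8798/health"); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("healthy")
		}
		time.Sleep(30 * time.Second)
	}
}
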
podID="3de4fb0c-479a-43eb-bf0e-910c8993247d" containerID="23137d1dfd07eb4543914832d6fbec9b81563f5df4b7e520d96f009aed078d17" exitCode=0 Jan 29 17:21:07 crc kubenswrapper[4886]: I0129 17:21:07.142846 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bfhgd" event={"ID":"3de4fb0c-479a-43eb-bf0e-910c8993247d","Type":"ContainerDied","Data":"23137d1dfd07eb4543914832d6fbec9b81563f5df4b7e520d96f009aed078d17"} Jan 29 17:21:08 crc kubenswrapper[4886]: I0129 17:21:08.156027 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bfhgd" event={"ID":"3de4fb0c-479a-43eb-bf0e-910c8993247d","Type":"ContainerStarted","Data":"acb662959a37da402dea77491374e233bfdd0a622e4f08294b5de2e093497514"} Jan 29 17:21:08 crc kubenswrapper[4886]: I0129 17:21:08.187977 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bfhgd" podStartSLOduration=2.903105783 podStartE2EDuration="2m53.187923943s" podCreationTimestamp="2026-01-29 17:18:15 +0000 UTC" firstStartedPulling="2026-01-29 17:18:17.256204503 +0000 UTC m=+3380.164923795" lastFinishedPulling="2026-01-29 17:21:07.541022683 +0000 UTC m=+3550.449741955" observedRunningTime="2026-01-29 17:21:08.180529292 +0000 UTC m=+3551.089248584" watchObservedRunningTime="2026-01-29 17:21:08.187923943 +0000 UTC m=+3551.096643225" Jan 29 17:21:09 crc kubenswrapper[4886]: E0129 17:21:09.616476 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vlgkv" podUID="75397189-e390-4b5d-bb9d-3017be63794e" Jan 29 17:21:15 crc kubenswrapper[4886]: I0129 17:21:15.520373 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bfhgd" Jan 29 17:21:15 crc kubenswrapper[4886]: I0129 17:21:15.521053 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bfhgd" Jan 29 17:21:15 crc kubenswrapper[4886]: I0129 17:21:15.565744 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bfhgd" Jan 29 17:21:16 crc kubenswrapper[4886]: I0129 17:21:16.278178 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bfhgd" Jan 29 17:21:16 crc kubenswrapper[4886]: I0129 17:21:16.341296 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bfhgd"] Jan 29 17:21:18 crc kubenswrapper[4886]: I0129 17:21:18.246779 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bfhgd" podUID="3de4fb0c-479a-43eb-bf0e-910c8993247d" containerName="registry-server" containerID="cri-o://acb662959a37da402dea77491374e233bfdd0a622e4f08294b5de2e093497514" gracePeriod=2 Jan 29 17:21:18 crc kubenswrapper[4886]: I0129 17:21:18.806479 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bfhgd" Jan 29 17:21:18 crc kubenswrapper[4886]: I0129 17:21:18.902299 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3de4fb0c-479a-43eb-bf0e-910c8993247d-utilities\") pod \"3de4fb0c-479a-43eb-bf0e-910c8993247d\" (UID: \"3de4fb0c-479a-43eb-bf0e-910c8993247d\") " Jan 29 17:21:18 crc kubenswrapper[4886]: I0129 17:21:18.902517 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3de4fb0c-479a-43eb-bf0e-910c8993247d-catalog-content\") pod \"3de4fb0c-479a-43eb-bf0e-910c8993247d\" (UID: \"3de4fb0c-479a-43eb-bf0e-910c8993247d\") " Jan 29 17:21:18 crc kubenswrapper[4886]: I0129 17:21:18.902629 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddpvb\" (UniqueName: \"kubernetes.io/projected/3de4fb0c-479a-43eb-bf0e-910c8993247d-kube-api-access-ddpvb\") pod \"3de4fb0c-479a-43eb-bf0e-910c8993247d\" (UID: \"3de4fb0c-479a-43eb-bf0e-910c8993247d\") " Jan 29 17:21:18 crc kubenswrapper[4886]: I0129 17:21:18.903253 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3de4fb0c-479a-43eb-bf0e-910c8993247d-utilities" (OuterVolumeSpecName: "utilities") pod "3de4fb0c-479a-43eb-bf0e-910c8993247d" (UID: "3de4fb0c-479a-43eb-bf0e-910c8993247d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:21:18 crc kubenswrapper[4886]: I0129 17:21:18.903607 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3de4fb0c-479a-43eb-bf0e-910c8993247d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:21:18 crc kubenswrapper[4886]: I0129 17:21:18.909090 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3de4fb0c-479a-43eb-bf0e-910c8993247d-kube-api-access-ddpvb" (OuterVolumeSpecName: "kube-api-access-ddpvb") pod "3de4fb0c-479a-43eb-bf0e-910c8993247d" (UID: "3de4fb0c-479a-43eb-bf0e-910c8993247d"). InnerVolumeSpecName "kube-api-access-ddpvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:21:18 crc kubenswrapper[4886]: I0129 17:21:18.965570 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3de4fb0c-479a-43eb-bf0e-910c8993247d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3de4fb0c-479a-43eb-bf0e-910c8993247d" (UID: "3de4fb0c-479a-43eb-bf0e-910c8993247d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:21:19 crc kubenswrapper[4886]: I0129 17:21:19.006263 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3de4fb0c-479a-43eb-bf0e-910c8993247d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:21:19 crc kubenswrapper[4886]: I0129 17:21:19.006551 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddpvb\" (UniqueName: \"kubernetes.io/projected/3de4fb0c-479a-43eb-bf0e-910c8993247d-kube-api-access-ddpvb\") on node \"crc\" DevicePath \"\"" Jan 29 17:21:19 crc kubenswrapper[4886]: I0129 17:21:19.266883 4886 generic.go:334] "Generic (PLEG): container finished" podID="3de4fb0c-479a-43eb-bf0e-910c8993247d" containerID="acb662959a37da402dea77491374e233bfdd0a622e4f08294b5de2e093497514" exitCode=0 Jan 29 17:21:19 crc kubenswrapper[4886]: I0129 17:21:19.266944 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bfhgd" event={"ID":"3de4fb0c-479a-43eb-bf0e-910c8993247d","Type":"ContainerDied","Data":"acb662959a37da402dea77491374e233bfdd0a622e4f08294b5de2e093497514"} Jan 29 17:21:19 crc kubenswrapper[4886]: I0129 17:21:19.266993 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bfhgd" event={"ID":"3de4fb0c-479a-43eb-bf0e-910c8993247d","Type":"ContainerDied","Data":"993d036c77151116f0104b5e52ac5de851cc76e020d62a74f76c3bbf77ef5ab3"} Jan 29 17:21:19 crc kubenswrapper[4886]: I0129 17:21:19.267022 4886 scope.go:117] "RemoveContainer" containerID="acb662959a37da402dea77491374e233bfdd0a622e4f08294b5de2e093497514" Jan 29 17:21:19 crc kubenswrapper[4886]: I0129 17:21:19.268387 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bfhgd" Jan 29 17:21:19 crc kubenswrapper[4886]: I0129 17:21:19.300742 4886 scope.go:117] "RemoveContainer" containerID="23137d1dfd07eb4543914832d6fbec9b81563f5df4b7e520d96f009aed078d17" Jan 29 17:21:19 crc kubenswrapper[4886]: I0129 17:21:19.336342 4886 scope.go:117] "RemoveContainer" containerID="7b334ee63888db455be0d61b260b626dbcfd228221eee73d28ea7fa18d022523" Jan 29 17:21:19 crc kubenswrapper[4886]: I0129 17:21:19.340728 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bfhgd"] Jan 29 17:21:19 crc kubenswrapper[4886]: I0129 17:21:19.367221 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bfhgd"] Jan 29 17:21:19 crc kubenswrapper[4886]: I0129 17:21:19.393824 4886 scope.go:117] "RemoveContainer" containerID="acb662959a37da402dea77491374e233bfdd0a622e4f08294b5de2e093497514" Jan 29 17:21:19 crc kubenswrapper[4886]: E0129 17:21:19.394415 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acb662959a37da402dea77491374e233bfdd0a622e4f08294b5de2e093497514\": container with ID starting with acb662959a37da402dea77491374e233bfdd0a622e4f08294b5de2e093497514 not found: ID does not exist" containerID="acb662959a37da402dea77491374e233bfdd0a622e4f08294b5de2e093497514" Jan 29 17:21:19 crc kubenswrapper[4886]: I0129 17:21:19.394455 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acb662959a37da402dea77491374e233bfdd0a622e4f08294b5de2e093497514"} err="failed to get container status \"acb662959a37da402dea77491374e233bfdd0a622e4f08294b5de2e093497514\": rpc error: code = NotFound desc = could not find container \"acb662959a37da402dea77491374e233bfdd0a622e4f08294b5de2e093497514\": container with ID starting with acb662959a37da402dea77491374e233bfdd0a622e4f08294b5de2e093497514 not found: ID does not exist" Jan 29 17:21:19 crc kubenswrapper[4886]: I0129 17:21:19.394484 4886 scope.go:117] "RemoveContainer" containerID="23137d1dfd07eb4543914832d6fbec9b81563f5df4b7e520d96f009aed078d17" Jan 29 17:21:19 crc kubenswrapper[4886]: E0129 17:21:19.394773 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23137d1dfd07eb4543914832d6fbec9b81563f5df4b7e520d96f009aed078d17\": container with ID starting with 23137d1dfd07eb4543914832d6fbec9b81563f5df4b7e520d96f009aed078d17 not found: ID does not exist" containerID="23137d1dfd07eb4543914832d6fbec9b81563f5df4b7e520d96f009aed078d17" Jan 29 17:21:19 crc kubenswrapper[4886]: I0129 17:21:19.394805 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23137d1dfd07eb4543914832d6fbec9b81563f5df4b7e520d96f009aed078d17"} err="failed to get container status \"23137d1dfd07eb4543914832d6fbec9b81563f5df4b7e520d96f009aed078d17\": rpc error: code = NotFound desc = could not find container \"23137d1dfd07eb4543914832d6fbec9b81563f5df4b7e520d96f009aed078d17\": container with ID starting with 23137d1dfd07eb4543914832d6fbec9b81563f5df4b7e520d96f009aed078d17 not found: ID does not exist" Jan 29 17:21:19 crc kubenswrapper[4886]: I0129 17:21:19.394824 4886 scope.go:117] "RemoveContainer" containerID="7b334ee63888db455be0d61b260b626dbcfd228221eee73d28ea7fa18d022523" Jan 29 17:21:19 crc kubenswrapper[4886]: E0129 17:21:19.395112 4886 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7b334ee63888db455be0d61b260b626dbcfd228221eee73d28ea7fa18d022523\": container with ID starting with 7b334ee63888db455be0d61b260b626dbcfd228221eee73d28ea7fa18d022523 not found: ID does not exist" containerID="7b334ee63888db455be0d61b260b626dbcfd228221eee73d28ea7fa18d022523" Jan 29 17:21:19 crc kubenswrapper[4886]: I0129 17:21:19.395140 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b334ee63888db455be0d61b260b626dbcfd228221eee73d28ea7fa18d022523"} err="failed to get container status \"7b334ee63888db455be0d61b260b626dbcfd228221eee73d28ea7fa18d022523\": rpc error: code = NotFound desc = could not find container \"7b334ee63888db455be0d61b260b626dbcfd228221eee73d28ea7fa18d022523\": container with ID starting with 7b334ee63888db455be0d61b260b626dbcfd228221eee73d28ea7fa18d022523 not found: ID does not exist" Jan 29 17:21:20 crc kubenswrapper[4886]: I0129 17:21:20.631189 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3de4fb0c-479a-43eb-bf0e-910c8993247d" path="/var/lib/kubelet/pods/3de4fb0c-479a-43eb-bf0e-910c8993247d/volumes" Jan 29 17:21:23 crc kubenswrapper[4886]: E0129 17:21:23.619197 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vlgkv" podUID="75397189-e390-4b5d-bb9d-3017be63794e" Jan 29 17:21:36 crc kubenswrapper[4886]: E0129 17:21:36.619659 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vlgkv" podUID="75397189-e390-4b5d-bb9d-3017be63794e" Jan 29 17:21:49 crc kubenswrapper[4886]: I0129 17:21:49.635764 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlgkv" event={"ID":"75397189-e390-4b5d-bb9d-3017be63794e","Type":"ContainerStarted","Data":"77e9f26b3d74ceb7b80a6b0256c506671555f76c7435cc01ce88b73443d5caf3"} Jan 29 17:21:51 crc kubenswrapper[4886]: I0129 17:21:51.679975 4886 generic.go:334] "Generic (PLEG): container finished" podID="75397189-e390-4b5d-bb9d-3017be63794e" containerID="77e9f26b3d74ceb7b80a6b0256c506671555f76c7435cc01ce88b73443d5caf3" exitCode=0 Jan 29 17:21:51 crc kubenswrapper[4886]: I0129 17:21:51.680031 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlgkv" event={"ID":"75397189-e390-4b5d-bb9d-3017be63794e","Type":"ContainerDied","Data":"77e9f26b3d74ceb7b80a6b0256c506671555f76c7435cc01ce88b73443d5caf3"} Jan 29 17:21:52 crc kubenswrapper[4886]: I0129 17:21:52.697735 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlgkv" event={"ID":"75397189-e390-4b5d-bb9d-3017be63794e","Type":"ContainerStarted","Data":"c5d7748afbf0374cd560d960e67108fce3b5d85a4dc5d8649cbc28214002142a"} Jan 29 17:21:52 crc kubenswrapper[4886]: I0129 17:21:52.723310 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vlgkv" podStartSLOduration=3.141950495 podStartE2EDuration="1m38.723288463s" podCreationTimestamp="2026-01-29 17:20:14 +0000 UTC" firstStartedPulling="2026-01-29 
17:20:16.560126085 +0000 UTC m=+3499.468845387" lastFinishedPulling="2026-01-29 17:21:52.141464083 +0000 UTC m=+3595.050183355" observedRunningTime="2026-01-29 17:21:52.71477829 +0000 UTC m=+3595.623497592" watchObservedRunningTime="2026-01-29 17:21:52.723288463 +0000 UTC m=+3595.632007735" Jan 29 17:21:55 crc kubenswrapper[4886]: I0129 17:21:55.005509 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vlgkv" Jan 29 17:21:55 crc kubenswrapper[4886]: I0129 17:21:55.005788 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vlgkv" Jan 29 17:21:55 crc kubenswrapper[4886]: I0129 17:21:55.054819 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vlgkv" Jan 29 17:22:05 crc kubenswrapper[4886]: I0129 17:22:05.099433 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vlgkv" Jan 29 17:22:05 crc kubenswrapper[4886]: I0129 17:22:05.151041 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vlgkv"] Jan 29 17:22:05 crc kubenswrapper[4886]: I0129 17:22:05.841097 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vlgkv" podUID="75397189-e390-4b5d-bb9d-3017be63794e" containerName="registry-server" containerID="cri-o://c5d7748afbf0374cd560d960e67108fce3b5d85a4dc5d8649cbc28214002142a" gracePeriod=2 Jan 29 17:22:06 crc kubenswrapper[4886]: I0129 17:22:06.460550 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vlgkv" Jan 29 17:22:06 crc kubenswrapper[4886]: I0129 17:22:06.594133 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6n2h\" (UniqueName: \"kubernetes.io/projected/75397189-e390-4b5d-bb9d-3017be63794e-kube-api-access-g6n2h\") pod \"75397189-e390-4b5d-bb9d-3017be63794e\" (UID: \"75397189-e390-4b5d-bb9d-3017be63794e\") " Jan 29 17:22:06 crc kubenswrapper[4886]: I0129 17:22:06.594241 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75397189-e390-4b5d-bb9d-3017be63794e-utilities\") pod \"75397189-e390-4b5d-bb9d-3017be63794e\" (UID: \"75397189-e390-4b5d-bb9d-3017be63794e\") " Jan 29 17:22:06 crc kubenswrapper[4886]: I0129 17:22:06.594552 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75397189-e390-4b5d-bb9d-3017be63794e-catalog-content\") pod \"75397189-e390-4b5d-bb9d-3017be63794e\" (UID: \"75397189-e390-4b5d-bb9d-3017be63794e\") " Jan 29 17:22:06 crc kubenswrapper[4886]: I0129 17:22:06.596002 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75397189-e390-4b5d-bb9d-3017be63794e-utilities" (OuterVolumeSpecName: "utilities") pod "75397189-e390-4b5d-bb9d-3017be63794e" (UID: "75397189-e390-4b5d-bb9d-3017be63794e"). InnerVolumeSpecName "utilities". 
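
The pod_startup_latency_tracker entry above encodes a simple relationship: podStartE2EDuration spans pod creation to the observed running time, while podStartSLOduration is the same span with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted. A sketch reconstructing the community-operators-vlgkv numbers from the logged values; kubelet does this arithmetic on monotonic (m=+...) readings, so the derived result matches the logged SLO duration only to within tens of nanoseconds:

package main

import (
	"fmt"
	"time"
)

// Relationship shown by the startup-latency entry:
//   podStartSLOduration ~= podStartE2EDuration - (lastFinishedPulling - firstStartedPulling)
// i.e. the SLO metric excludes time spent pulling images.
func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	firstPull := parse("2026-01-29 17:20:16.560126085 +0000 UTC")
	lastPull := parse("2026-01-29 17:21:52.141464083 +0000 UTC")

	e2e, _ := time.ParseDuration("1m38.723288463s") // logged podStartE2EDuration
	pull := lastPull.Sub(firstPull)
	fmt.Println("image pull window:  ", pull)
	fmt.Println("derived SLO duration:", e2e-pull) // log: 3.141950495s
}
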
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:22:06 crc kubenswrapper[4886]: I0129 17:22:06.603703 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75397189-e390-4b5d-bb9d-3017be63794e-kube-api-access-g6n2h" (OuterVolumeSpecName: "kube-api-access-g6n2h") pod "75397189-e390-4b5d-bb9d-3017be63794e" (UID: "75397189-e390-4b5d-bb9d-3017be63794e"). InnerVolumeSpecName "kube-api-access-g6n2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:22:06 crc kubenswrapper[4886]: I0129 17:22:06.652028 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75397189-e390-4b5d-bb9d-3017be63794e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75397189-e390-4b5d-bb9d-3017be63794e" (UID: "75397189-e390-4b5d-bb9d-3017be63794e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:22:06 crc kubenswrapper[4886]: I0129 17:22:06.698536 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75397189-e390-4b5d-bb9d-3017be63794e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:22:06 crc kubenswrapper[4886]: I0129 17:22:06.698580 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6n2h\" (UniqueName: \"kubernetes.io/projected/75397189-e390-4b5d-bb9d-3017be63794e-kube-api-access-g6n2h\") on node \"crc\" DevicePath \"\"" Jan 29 17:22:06 crc kubenswrapper[4886]: I0129 17:22:06.698593 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75397189-e390-4b5d-bb9d-3017be63794e-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:22:06 crc kubenswrapper[4886]: I0129 17:22:06.881214 4886 generic.go:334] "Generic (PLEG): container finished" podID="75397189-e390-4b5d-bb9d-3017be63794e" containerID="c5d7748afbf0374cd560d960e67108fce3b5d85a4dc5d8649cbc28214002142a" exitCode=0 Jan 29 17:22:06 crc kubenswrapper[4886]: I0129 17:22:06.881275 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlgkv" event={"ID":"75397189-e390-4b5d-bb9d-3017be63794e","Type":"ContainerDied","Data":"c5d7748afbf0374cd560d960e67108fce3b5d85a4dc5d8649cbc28214002142a"} Jan 29 17:22:06 crc kubenswrapper[4886]: I0129 17:22:06.881309 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlgkv" event={"ID":"75397189-e390-4b5d-bb9d-3017be63794e","Type":"ContainerDied","Data":"3014185eacc0527fb4588d33782092cb9980b118f2b2053ba0af25fe3485682c"} Jan 29 17:22:06 crc kubenswrapper[4886]: I0129 17:22:06.881349 4886 scope.go:117] "RemoveContainer" containerID="c5d7748afbf0374cd560d960e67108fce3b5d85a4dc5d8649cbc28214002142a" Jan 29 17:22:06 crc kubenswrapper[4886]: I0129 17:22:06.881745 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vlgkv" Jan 29 17:22:06 crc kubenswrapper[4886]: I0129 17:22:06.908658 4886 scope.go:117] "RemoveContainer" containerID="77e9f26b3d74ceb7b80a6b0256c506671555f76c7435cc01ce88b73443d5caf3" Jan 29 17:22:06 crc kubenswrapper[4886]: I0129 17:22:06.927840 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vlgkv"] Jan 29 17:22:06 crc kubenswrapper[4886]: I0129 17:22:06.936599 4886 scope.go:117] "RemoveContainer" containerID="50e7d409e21eaec1e565da5ff686d38148bb5fcc53234f8118461f6f78ce385c" Jan 29 17:22:06 crc kubenswrapper[4886]: I0129 17:22:06.939904 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vlgkv"] Jan 29 17:22:06 crc kubenswrapper[4886]: I0129 17:22:06.995485 4886 scope.go:117] "RemoveContainer" containerID="c5d7748afbf0374cd560d960e67108fce3b5d85a4dc5d8649cbc28214002142a" Jan 29 17:22:06 crc kubenswrapper[4886]: E0129 17:22:06.996256 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5d7748afbf0374cd560d960e67108fce3b5d85a4dc5d8649cbc28214002142a\": container with ID starting with c5d7748afbf0374cd560d960e67108fce3b5d85a4dc5d8649cbc28214002142a not found: ID does not exist" containerID="c5d7748afbf0374cd560d960e67108fce3b5d85a4dc5d8649cbc28214002142a" Jan 29 17:22:06 crc kubenswrapper[4886]: I0129 17:22:06.996303 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5d7748afbf0374cd560d960e67108fce3b5d85a4dc5d8649cbc28214002142a"} err="failed to get container status \"c5d7748afbf0374cd560d960e67108fce3b5d85a4dc5d8649cbc28214002142a\": rpc error: code = NotFound desc = could not find container \"c5d7748afbf0374cd560d960e67108fce3b5d85a4dc5d8649cbc28214002142a\": container with ID starting with c5d7748afbf0374cd560d960e67108fce3b5d85a4dc5d8649cbc28214002142a not found: ID does not exist" Jan 29 17:22:06 crc kubenswrapper[4886]: I0129 17:22:06.996348 4886 scope.go:117] "RemoveContainer" containerID="77e9f26b3d74ceb7b80a6b0256c506671555f76c7435cc01ce88b73443d5caf3" Jan 29 17:22:06 crc kubenswrapper[4886]: E0129 17:22:06.997024 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77e9f26b3d74ceb7b80a6b0256c506671555f76c7435cc01ce88b73443d5caf3\": container with ID starting with 77e9f26b3d74ceb7b80a6b0256c506671555f76c7435cc01ce88b73443d5caf3 not found: ID does not exist" containerID="77e9f26b3d74ceb7b80a6b0256c506671555f76c7435cc01ce88b73443d5caf3" Jan 29 17:22:06 crc kubenswrapper[4886]: I0129 17:22:06.997215 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77e9f26b3d74ceb7b80a6b0256c506671555f76c7435cc01ce88b73443d5caf3"} err="failed to get container status \"77e9f26b3d74ceb7b80a6b0256c506671555f76c7435cc01ce88b73443d5caf3\": rpc error: code = NotFound desc = could not find container \"77e9f26b3d74ceb7b80a6b0256c506671555f76c7435cc01ce88b73443d5caf3\": container with ID starting with 77e9f26b3d74ceb7b80a6b0256c506671555f76c7435cc01ce88b73443d5caf3 not found: ID does not exist" Jan 29 17:22:06 crc kubenswrapper[4886]: I0129 17:22:06.997413 4886 scope.go:117] "RemoveContainer" containerID="50e7d409e21eaec1e565da5ff686d38148bb5fcc53234f8118461f6f78ce385c" Jan 29 17:22:06 crc kubenswrapper[4886]: E0129 17:22:06.998530 4886 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"50e7d409e21eaec1e565da5ff686d38148bb5fcc53234f8118461f6f78ce385c\": container with ID starting with 50e7d409e21eaec1e565da5ff686d38148bb5fcc53234f8118461f6f78ce385c not found: ID does not exist" containerID="50e7d409e21eaec1e565da5ff686d38148bb5fcc53234f8118461f6f78ce385c" Jan 29 17:22:06 crc kubenswrapper[4886]: I0129 17:22:06.998577 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50e7d409e21eaec1e565da5ff686d38148bb5fcc53234f8118461f6f78ce385c"} err="failed to get container status \"50e7d409e21eaec1e565da5ff686d38148bb5fcc53234f8118461f6f78ce385c\": rpc error: code = NotFound desc = could not find container \"50e7d409e21eaec1e565da5ff686d38148bb5fcc53234f8118461f6f78ce385c\": container with ID starting with 50e7d409e21eaec1e565da5ff686d38148bb5fcc53234f8118461f6f78ce385c not found: ID does not exist" Jan 29 17:22:08 crc kubenswrapper[4886]: I0129 17:22:08.627240 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75397189-e390-4b5d-bb9d-3017be63794e" path="/var/lib/kubelet/pods/75397189-e390-4b5d-bb9d-3017be63794e/volumes" Jan 29 17:23:17 crc kubenswrapper[4886]: I0129 17:23:17.545686 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-f458794ff-v7p92" podUID="79c81ef9-65c7-4372-9a47-8ed93521eadf" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 29 17:23:29 crc kubenswrapper[4886]: I0129 17:23:29.661268 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:23:29 crc kubenswrapper[4886]: I0129 17:23:29.662070 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:23:59 crc kubenswrapper[4886]: I0129 17:23:59.660654 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:23:59 crc kubenswrapper[4886]: I0129 17:23:59.661247 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:24:29 crc kubenswrapper[4886]: I0129 17:24:29.661084 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:24:29 crc kubenswrapper[4886]: I0129 17:24:29.661823 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" 
podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:24:29 crc kubenswrapper[4886]: I0129 17:24:29.661903 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" Jan 29 17:24:29 crc kubenswrapper[4886]: I0129 17:24:29.662902 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b"} pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 17:24:29 crc kubenswrapper[4886]: I0129 17:24:29.662987 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" containerID="cri-o://55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b" gracePeriod=600 Jan 29 17:24:29 crc kubenswrapper[4886]: E0129 17:24:29.819763 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:24:30 crc kubenswrapper[4886]: I0129 17:24:30.496221 4886 generic.go:334] "Generic (PLEG): container finished" podID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerID="55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b" exitCode=0 Jan 29 17:24:30 crc kubenswrapper[4886]: I0129 17:24:30.496285 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerDied","Data":"55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b"} Jan 29 17:24:30 crc kubenswrapper[4886]: I0129 17:24:30.496398 4886 scope.go:117] "RemoveContainer" containerID="bd2f023886beead4933eaa92185559b0b9421864121dccb5c51a6c3ddd9cce35" Jan 29 17:24:30 crc kubenswrapper[4886]: I0129 17:24:30.497421 4886 scope.go:117] "RemoveContainer" containerID="55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b" Jan 29 17:24:30 crc kubenswrapper[4886]: E0129 17:24:30.497820 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:24:41 crc kubenswrapper[4886]: I0129 17:24:41.615939 4886 scope.go:117] "RemoveContainer" containerID="55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b" Jan 29 17:24:41 crc kubenswrapper[4886]: E0129 17:24:41.617394 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:24:52 crc kubenswrapper[4886]: I0129 17:24:52.615530 4886 scope.go:117] "RemoveContainer" containerID="55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b" Jan 29 17:24:52 crc kubenswrapper[4886]: E0129 17:24:52.617065 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:25:07 crc kubenswrapper[4886]: I0129 17:25:07.615618 4886 scope.go:117] "RemoveContainer" containerID="55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b" Jan 29 17:25:07 crc kubenswrapper[4886]: E0129 17:25:07.616959 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:25:18 crc kubenswrapper[4886]: I0129 17:25:18.631250 4886 scope.go:117] "RemoveContainer" containerID="55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b" Jan 29 17:25:18 crc kubenswrapper[4886]: E0129 17:25:18.632501 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:25:30 crc kubenswrapper[4886]: I0129 17:25:30.616055 4886 scope.go:117] "RemoveContainer" containerID="55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b" Jan 29 17:25:30 crc kubenswrapper[4886]: E0129 17:25:30.617512 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:25:42 crc kubenswrapper[4886]: I0129 17:25:42.616122 4886 scope.go:117] "RemoveContainer" containerID="55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b" Jan 29 17:25:42 crc kubenswrapper[4886]: E0129 17:25:42.617548 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:25:55 crc kubenswrapper[4886]: I0129 17:25:55.615791 4886 scope.go:117] "RemoveContainer" containerID="55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b" Jan 29 17:25:55 crc kubenswrapper[4886]: E0129 17:25:55.616828 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:26:10 crc kubenswrapper[4886]: I0129 17:26:10.616448 4886 scope.go:117] "RemoveContainer" containerID="55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b" Jan 29 17:26:10 crc kubenswrapper[4886]: E0129 17:26:10.617273 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:26:24 crc kubenswrapper[4886]: I0129 17:26:24.616002 4886 scope.go:117] "RemoveContainer" containerID="55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b" Jan 29 17:26:24 crc kubenswrapper[4886]: E0129 17:26:24.616782 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:26:35 crc kubenswrapper[4886]: I0129 17:26:35.616358 4886 scope.go:117] "RemoveContainer" containerID="55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b" Jan 29 17:26:35 crc kubenswrapper[4886]: E0129 17:26:35.618670 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:26:49 crc kubenswrapper[4886]: I0129 17:26:49.615377 4886 scope.go:117] "RemoveContainer" containerID="55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b" Jan 29 17:26:49 crc kubenswrapper[4886]: E0129 17:26:49.616044 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:27:02 crc kubenswrapper[4886]: I0129 17:27:02.614738 4886 
scope.go:117] "RemoveContainer" containerID="55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b" Jan 29 17:27:02 crc kubenswrapper[4886]: E0129 17:27:02.615558 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:27:15 crc kubenswrapper[4886]: I0129 17:27:15.615472 4886 scope.go:117] "RemoveContainer" containerID="55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b" Jan 29 17:27:15 crc kubenswrapper[4886]: E0129 17:27:15.618681 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:27:26 crc kubenswrapper[4886]: I0129 17:27:26.618162 4886 scope.go:117] "RemoveContainer" containerID="55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b" Jan 29 17:27:26 crc kubenswrapper[4886]: E0129 17:27:26.620005 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:27:37 crc kubenswrapper[4886]: I0129 17:27:37.618624 4886 scope.go:117] "RemoveContainer" containerID="55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b" Jan 29 17:27:37 crc kubenswrapper[4886]: E0129 17:27:37.620231 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:27:52 crc kubenswrapper[4886]: I0129 17:27:52.615899 4886 scope.go:117] "RemoveContainer" containerID="55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b" Jan 29 17:27:52 crc kubenswrapper[4886]: E0129 17:27:52.616747 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:27:54 crc kubenswrapper[4886]: I0129 17:27:54.829697 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cvqft"] Jan 29 17:27:54 crc kubenswrapper[4886]: E0129 17:27:54.830494 4886 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3de4fb0c-479a-43eb-bf0e-910c8993247d" containerName="extract-content" Jan 29 17:27:54 crc kubenswrapper[4886]: I0129 17:27:54.830506 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="3de4fb0c-479a-43eb-bf0e-910c8993247d" containerName="extract-content" Jan 29 17:27:54 crc kubenswrapper[4886]: E0129 17:27:54.830531 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3de4fb0c-479a-43eb-bf0e-910c8993247d" containerName="extract-utilities" Jan 29 17:27:54 crc kubenswrapper[4886]: I0129 17:27:54.830537 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="3de4fb0c-479a-43eb-bf0e-910c8993247d" containerName="extract-utilities" Jan 29 17:27:54 crc kubenswrapper[4886]: E0129 17:27:54.830548 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75397189-e390-4b5d-bb9d-3017be63794e" containerName="extract-utilities" Jan 29 17:27:54 crc kubenswrapper[4886]: I0129 17:27:54.830554 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="75397189-e390-4b5d-bb9d-3017be63794e" containerName="extract-utilities" Jan 29 17:27:54 crc kubenswrapper[4886]: E0129 17:27:54.830566 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75397189-e390-4b5d-bb9d-3017be63794e" containerName="registry-server" Jan 29 17:27:54 crc kubenswrapper[4886]: I0129 17:27:54.830572 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="75397189-e390-4b5d-bb9d-3017be63794e" containerName="registry-server" Jan 29 17:27:54 crc kubenswrapper[4886]: E0129 17:27:54.830596 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75397189-e390-4b5d-bb9d-3017be63794e" containerName="extract-content" Jan 29 17:27:54 crc kubenswrapper[4886]: I0129 17:27:54.830602 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="75397189-e390-4b5d-bb9d-3017be63794e" containerName="extract-content" Jan 29 17:27:54 crc kubenswrapper[4886]: E0129 17:27:54.830610 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3de4fb0c-479a-43eb-bf0e-910c8993247d" containerName="registry-server" Jan 29 17:27:54 crc kubenswrapper[4886]: I0129 17:27:54.830615 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="3de4fb0c-479a-43eb-bf0e-910c8993247d" containerName="registry-server" Jan 29 17:27:54 crc kubenswrapper[4886]: I0129 17:27:54.830804 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="3de4fb0c-479a-43eb-bf0e-910c8993247d" containerName="registry-server" Jan 29 17:27:54 crc kubenswrapper[4886]: I0129 17:27:54.830823 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="75397189-e390-4b5d-bb9d-3017be63794e" containerName="registry-server" Jan 29 17:27:54 crc kubenswrapper[4886]: I0129 17:27:54.832317 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cvqft" Jan 29 17:27:54 crc kubenswrapper[4886]: I0129 17:27:54.854928 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cvqft"] Jan 29 17:27:54 crc kubenswrapper[4886]: I0129 17:27:54.941072 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgnp6\" (UniqueName: \"kubernetes.io/projected/bd300ccf-3376-4861-bcae-bf7e7310ab20-kube-api-access-wgnp6\") pod \"redhat-operators-cvqft\" (UID: \"bd300ccf-3376-4861-bcae-bf7e7310ab20\") " pod="openshift-marketplace/redhat-operators-cvqft" Jan 29 17:27:54 crc kubenswrapper[4886]: I0129 17:27:54.941160 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd300ccf-3376-4861-bcae-bf7e7310ab20-catalog-content\") pod \"redhat-operators-cvqft\" (UID: \"bd300ccf-3376-4861-bcae-bf7e7310ab20\") " pod="openshift-marketplace/redhat-operators-cvqft" Jan 29 17:27:54 crc kubenswrapper[4886]: I0129 17:27:54.941193 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd300ccf-3376-4861-bcae-bf7e7310ab20-utilities\") pod \"redhat-operators-cvqft\" (UID: \"bd300ccf-3376-4861-bcae-bf7e7310ab20\") " pod="openshift-marketplace/redhat-operators-cvqft" Jan 29 17:27:55 crc kubenswrapper[4886]: I0129 17:27:55.043414 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgnp6\" (UniqueName: \"kubernetes.io/projected/bd300ccf-3376-4861-bcae-bf7e7310ab20-kube-api-access-wgnp6\") pod \"redhat-operators-cvqft\" (UID: \"bd300ccf-3376-4861-bcae-bf7e7310ab20\") " pod="openshift-marketplace/redhat-operators-cvqft" Jan 29 17:27:55 crc kubenswrapper[4886]: I0129 17:27:55.043536 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd300ccf-3376-4861-bcae-bf7e7310ab20-catalog-content\") pod \"redhat-operators-cvqft\" (UID: \"bd300ccf-3376-4861-bcae-bf7e7310ab20\") " pod="openshift-marketplace/redhat-operators-cvqft" Jan 29 17:27:55 crc kubenswrapper[4886]: I0129 17:27:55.043593 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd300ccf-3376-4861-bcae-bf7e7310ab20-utilities\") pod \"redhat-operators-cvqft\" (UID: \"bd300ccf-3376-4861-bcae-bf7e7310ab20\") " pod="openshift-marketplace/redhat-operators-cvqft" Jan 29 17:27:55 crc kubenswrapper[4886]: I0129 17:27:55.044096 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd300ccf-3376-4861-bcae-bf7e7310ab20-catalog-content\") pod \"redhat-operators-cvqft\" (UID: \"bd300ccf-3376-4861-bcae-bf7e7310ab20\") " pod="openshift-marketplace/redhat-operators-cvqft" Jan 29 17:27:55 crc kubenswrapper[4886]: I0129 17:27:55.044191 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd300ccf-3376-4861-bcae-bf7e7310ab20-utilities\") pod \"redhat-operators-cvqft\" (UID: \"bd300ccf-3376-4861-bcae-bf7e7310ab20\") " pod="openshift-marketplace/redhat-operators-cvqft" Jan 29 17:27:55 crc kubenswrapper[4886]: I0129 17:27:55.076384 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wgnp6\" (UniqueName: \"kubernetes.io/projected/bd300ccf-3376-4861-bcae-bf7e7310ab20-kube-api-access-wgnp6\") pod \"redhat-operators-cvqft\" (UID: \"bd300ccf-3376-4861-bcae-bf7e7310ab20\") " pod="openshift-marketplace/redhat-operators-cvqft" Jan 29 17:27:55 crc kubenswrapper[4886]: I0129 17:27:55.153301 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cvqft" Jan 29 17:27:55 crc kubenswrapper[4886]: I0129 17:27:55.671789 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cvqft"] Jan 29 17:27:56 crc kubenswrapper[4886]: I0129 17:27:56.251090 4886 generic.go:334] "Generic (PLEG): container finished" podID="bd300ccf-3376-4861-bcae-bf7e7310ab20" containerID="2946b5a7224cce9e100a708a0973e21f7be5d0a36fb81ad34a298fee9b955dad" exitCode=0 Jan 29 17:27:56 crc kubenswrapper[4886]: I0129 17:27:56.251138 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cvqft" event={"ID":"bd300ccf-3376-4861-bcae-bf7e7310ab20","Type":"ContainerDied","Data":"2946b5a7224cce9e100a708a0973e21f7be5d0a36fb81ad34a298fee9b955dad"} Jan 29 17:27:56 crc kubenswrapper[4886]: I0129 17:27:56.251184 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cvqft" event={"ID":"bd300ccf-3376-4861-bcae-bf7e7310ab20","Type":"ContainerStarted","Data":"ac306b50644c0ba93e28d27ffe560102b7472fe91bb542b8c5a074fde1b9d833"} Jan 29 17:27:56 crc kubenswrapper[4886]: I0129 17:27:56.253441 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 17:27:56 crc kubenswrapper[4886]: E0129 17:27:56.378914 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 17:27:56 crc kubenswrapper[4886]: E0129 17:27:56.379622 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wgnp6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-cvqft_openshift-marketplace(bd300ccf-3376-4861-bcae-bf7e7310ab20): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:27:56 crc kubenswrapper[4886]: E0129 17:27:56.380942 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-cvqft" podUID="bd300ccf-3376-4861-bcae-bf7e7310ab20" Jan 29 17:27:57 crc kubenswrapper[4886]: E0129 17:27:57.264474 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-cvqft" podUID="bd300ccf-3376-4861-bcae-bf7e7310ab20" Jan 29 17:28:05 crc kubenswrapper[4886]: I0129 17:28:05.615419 4886 scope.go:117] "RemoveContainer" containerID="55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b" Jan 29 17:28:05 crc kubenswrapper[4886]: E0129 17:28:05.616118 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:28:12 crc kubenswrapper[4886]: I0129 17:28:12.440508 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cvqft" event={"ID":"bd300ccf-3376-4861-bcae-bf7e7310ab20","Type":"ContainerStarted","Data":"256d215b6bc8f6b4dd2d7a096efe29752d09fb6df76226c80283a55975a7751f"} Jan 29 17:28:13 crc kubenswrapper[4886]: I0129 17:28:13.070678 4886 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-marketplace-k4wq2"] Jan 29 17:28:13 crc kubenswrapper[4886]: I0129 17:28:13.073428 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k4wq2" Jan 29 17:28:13 crc kubenswrapper[4886]: I0129 17:28:13.084166 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k4wq2"] Jan 29 17:28:13 crc kubenswrapper[4886]: I0129 17:28:13.120061 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05cce123-7c5e-4254-b4af-53d0a93b2087-catalog-content\") pod \"redhat-marketplace-k4wq2\" (UID: \"05cce123-7c5e-4254-b4af-53d0a93b2087\") " pod="openshift-marketplace/redhat-marketplace-k4wq2" Jan 29 17:28:13 crc kubenswrapper[4886]: I0129 17:28:13.120157 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05cce123-7c5e-4254-b4af-53d0a93b2087-utilities\") pod \"redhat-marketplace-k4wq2\" (UID: \"05cce123-7c5e-4254-b4af-53d0a93b2087\") " pod="openshift-marketplace/redhat-marketplace-k4wq2" Jan 29 17:28:13 crc kubenswrapper[4886]: I0129 17:28:13.120192 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96btq\" (UniqueName: \"kubernetes.io/projected/05cce123-7c5e-4254-b4af-53d0a93b2087-kube-api-access-96btq\") pod \"redhat-marketplace-k4wq2\" (UID: \"05cce123-7c5e-4254-b4af-53d0a93b2087\") " pod="openshift-marketplace/redhat-marketplace-k4wq2" Jan 29 17:28:13 crc kubenswrapper[4886]: I0129 17:28:13.222341 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05cce123-7c5e-4254-b4af-53d0a93b2087-catalog-content\") pod \"redhat-marketplace-k4wq2\" (UID: \"05cce123-7c5e-4254-b4af-53d0a93b2087\") " pod="openshift-marketplace/redhat-marketplace-k4wq2" Jan 29 17:28:13 crc kubenswrapper[4886]: I0129 17:28:13.222452 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05cce123-7c5e-4254-b4af-53d0a93b2087-utilities\") pod \"redhat-marketplace-k4wq2\" (UID: \"05cce123-7c5e-4254-b4af-53d0a93b2087\") " pod="openshift-marketplace/redhat-marketplace-k4wq2" Jan 29 17:28:13 crc kubenswrapper[4886]: I0129 17:28:13.222491 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96btq\" (UniqueName: \"kubernetes.io/projected/05cce123-7c5e-4254-b4af-53d0a93b2087-kube-api-access-96btq\") pod \"redhat-marketplace-k4wq2\" (UID: \"05cce123-7c5e-4254-b4af-53d0a93b2087\") " pod="openshift-marketplace/redhat-marketplace-k4wq2" Jan 29 17:28:13 crc kubenswrapper[4886]: I0129 17:28:13.223003 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05cce123-7c5e-4254-b4af-53d0a93b2087-catalog-content\") pod \"redhat-marketplace-k4wq2\" (UID: \"05cce123-7c5e-4254-b4af-53d0a93b2087\") " pod="openshift-marketplace/redhat-marketplace-k4wq2" Jan 29 17:28:13 crc kubenswrapper[4886]: I0129 17:28:13.223027 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05cce123-7c5e-4254-b4af-53d0a93b2087-utilities\") pod \"redhat-marketplace-k4wq2\" (UID: 
\"05cce123-7c5e-4254-b4af-53d0a93b2087\") " pod="openshift-marketplace/redhat-marketplace-k4wq2" Jan 29 17:28:13 crc kubenswrapper[4886]: I0129 17:28:13.245981 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96btq\" (UniqueName: \"kubernetes.io/projected/05cce123-7c5e-4254-b4af-53d0a93b2087-kube-api-access-96btq\") pod \"redhat-marketplace-k4wq2\" (UID: \"05cce123-7c5e-4254-b4af-53d0a93b2087\") " pod="openshift-marketplace/redhat-marketplace-k4wq2" Jan 29 17:28:13 crc kubenswrapper[4886]: I0129 17:28:13.430065 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k4wq2" Jan 29 17:28:14 crc kubenswrapper[4886]: I0129 17:28:14.015274 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k4wq2"] Jan 29 17:28:14 crc kubenswrapper[4886]: I0129 17:28:14.459652 4886 generic.go:334] "Generic (PLEG): container finished" podID="05cce123-7c5e-4254-b4af-53d0a93b2087" containerID="842f59d01bbe3e85d057d0fd9d33f7e9337664f17faeae195f7a44ef00d411bf" exitCode=0 Jan 29 17:28:14 crc kubenswrapper[4886]: I0129 17:28:14.459713 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k4wq2" event={"ID":"05cce123-7c5e-4254-b4af-53d0a93b2087","Type":"ContainerDied","Data":"842f59d01bbe3e85d057d0fd9d33f7e9337664f17faeae195f7a44ef00d411bf"} Jan 29 17:28:14 crc kubenswrapper[4886]: I0129 17:28:14.459940 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k4wq2" event={"ID":"05cce123-7c5e-4254-b4af-53d0a93b2087","Type":"ContainerStarted","Data":"61a0b584afbf6481ef8bc0dbb3bd55f9512c81522023b6b6c81f7237e81d868f"} Jan 29 17:28:15 crc kubenswrapper[4886]: I0129 17:28:15.474043 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8ddvd"] Jan 29 17:28:15 crc kubenswrapper[4886]: I0129 17:28:15.483955 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8ddvd" Jan 29 17:28:15 crc kubenswrapper[4886]: I0129 17:28:15.492212 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8ddvd"] Jan 29 17:28:15 crc kubenswrapper[4886]: I0129 17:28:15.594113 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a75b14-10dc-482f-9b03-be71a8b0bfd4-catalog-content\") pod \"certified-operators-8ddvd\" (UID: \"35a75b14-10dc-482f-9b03-be71a8b0bfd4\") " pod="openshift-marketplace/certified-operators-8ddvd" Jan 29 17:28:15 crc kubenswrapper[4886]: I0129 17:28:15.594195 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5qz6\" (UniqueName: \"kubernetes.io/projected/35a75b14-10dc-482f-9b03-be71a8b0bfd4-kube-api-access-s5qz6\") pod \"certified-operators-8ddvd\" (UID: \"35a75b14-10dc-482f-9b03-be71a8b0bfd4\") " pod="openshift-marketplace/certified-operators-8ddvd" Jan 29 17:28:15 crc kubenswrapper[4886]: I0129 17:28:15.594473 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a75b14-10dc-482f-9b03-be71a8b0bfd4-utilities\") pod \"certified-operators-8ddvd\" (UID: \"35a75b14-10dc-482f-9b03-be71a8b0bfd4\") " pod="openshift-marketplace/certified-operators-8ddvd" Jan 29 17:28:15 crc kubenswrapper[4886]: I0129 17:28:15.696966 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a75b14-10dc-482f-9b03-be71a8b0bfd4-catalog-content\") pod \"certified-operators-8ddvd\" (UID: \"35a75b14-10dc-482f-9b03-be71a8b0bfd4\") " pod="openshift-marketplace/certified-operators-8ddvd" Jan 29 17:28:15 crc kubenswrapper[4886]: I0129 17:28:15.697033 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5qz6\" (UniqueName: \"kubernetes.io/projected/35a75b14-10dc-482f-9b03-be71a8b0bfd4-kube-api-access-s5qz6\") pod \"certified-operators-8ddvd\" (UID: \"35a75b14-10dc-482f-9b03-be71a8b0bfd4\") " pod="openshift-marketplace/certified-operators-8ddvd" Jan 29 17:28:15 crc kubenswrapper[4886]: I0129 17:28:15.697193 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a75b14-10dc-482f-9b03-be71a8b0bfd4-utilities\") pod \"certified-operators-8ddvd\" (UID: \"35a75b14-10dc-482f-9b03-be71a8b0bfd4\") " pod="openshift-marketplace/certified-operators-8ddvd" Jan 29 17:28:15 crc kubenswrapper[4886]: I0129 17:28:15.697753 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a75b14-10dc-482f-9b03-be71a8b0bfd4-utilities\") pod \"certified-operators-8ddvd\" (UID: \"35a75b14-10dc-482f-9b03-be71a8b0bfd4\") " pod="openshift-marketplace/certified-operators-8ddvd" Jan 29 17:28:15 crc kubenswrapper[4886]: I0129 17:28:15.698410 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a75b14-10dc-482f-9b03-be71a8b0bfd4-catalog-content\") pod \"certified-operators-8ddvd\" (UID: \"35a75b14-10dc-482f-9b03-be71a8b0bfd4\") " pod="openshift-marketplace/certified-operators-8ddvd" Jan 29 17:28:15 crc kubenswrapper[4886]: I0129 17:28:15.730581 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-s5qz6\" (UniqueName: \"kubernetes.io/projected/35a75b14-10dc-482f-9b03-be71a8b0bfd4-kube-api-access-s5qz6\") pod \"certified-operators-8ddvd\" (UID: \"35a75b14-10dc-482f-9b03-be71a8b0bfd4\") " pod="openshift-marketplace/certified-operators-8ddvd" Jan 29 17:28:15 crc kubenswrapper[4886]: I0129 17:28:15.854344 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8ddvd" Jan 29 17:28:16 crc kubenswrapper[4886]: I0129 17:28:16.423121 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8ddvd"] Jan 29 17:28:16 crc kubenswrapper[4886]: I0129 17:28:16.488319 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ddvd" event={"ID":"35a75b14-10dc-482f-9b03-be71a8b0bfd4","Type":"ContainerStarted","Data":"ae8c7924702c370ac40c8e4b953e0e47962503187d56a562a79b94f682fa85c7"} Jan 29 17:28:16 crc kubenswrapper[4886]: I0129 17:28:16.500658 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k4wq2" event={"ID":"05cce123-7c5e-4254-b4af-53d0a93b2087","Type":"ContainerStarted","Data":"253cc685146e10232d5ab9f70d1ede857a6248476c2faa279c39a0a3b167d394"} Jan 29 17:28:17 crc kubenswrapper[4886]: I0129 17:28:17.513387 4886 generic.go:334] "Generic (PLEG): container finished" podID="35a75b14-10dc-482f-9b03-be71a8b0bfd4" containerID="743f0e0c8bd0dfe8ae38c7f2d03a8981e74ea3dba06a6339a6bd917fe57aa8e9" exitCode=0 Jan 29 17:28:17 crc kubenswrapper[4886]: I0129 17:28:17.513464 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ddvd" event={"ID":"35a75b14-10dc-482f-9b03-be71a8b0bfd4","Type":"ContainerDied","Data":"743f0e0c8bd0dfe8ae38c7f2d03a8981e74ea3dba06a6339a6bd917fe57aa8e9"} Jan 29 17:28:17 crc kubenswrapper[4886]: I0129 17:28:17.615744 4886 scope.go:117] "RemoveContainer" containerID="55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b" Jan 29 17:28:17 crc kubenswrapper[4886]: E0129 17:28:17.616209 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:28:18 crc kubenswrapper[4886]: I0129 17:28:18.540958 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ddvd" event={"ID":"35a75b14-10dc-482f-9b03-be71a8b0bfd4","Type":"ContainerStarted","Data":"642ddcf7d24f5ba4de7f4cfe5021d1c82bdda14e5ce39e790d42f342b92ed808"} Jan 29 17:28:18 crc kubenswrapper[4886]: I0129 17:28:18.545205 4886 generic.go:334] "Generic (PLEG): container finished" podID="05cce123-7c5e-4254-b4af-53d0a93b2087" containerID="253cc685146e10232d5ab9f70d1ede857a6248476c2faa279c39a0a3b167d394" exitCode=0 Jan 29 17:28:18 crc kubenswrapper[4886]: I0129 17:28:18.545415 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k4wq2" event={"ID":"05cce123-7c5e-4254-b4af-53d0a93b2087","Type":"ContainerDied","Data":"253cc685146e10232d5ab9f70d1ede857a6248476c2faa279c39a0a3b167d394"} Jan 29 17:28:19 crc kubenswrapper[4886]: I0129 17:28:19.562219 4886 generic.go:334] 
"Generic (PLEG): container finished" podID="bd300ccf-3376-4861-bcae-bf7e7310ab20" containerID="256d215b6bc8f6b4dd2d7a096efe29752d09fb6df76226c80283a55975a7751f" exitCode=0 Jan 29 17:28:19 crc kubenswrapper[4886]: I0129 17:28:19.564214 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cvqft" event={"ID":"bd300ccf-3376-4861-bcae-bf7e7310ab20","Type":"ContainerDied","Data":"256d215b6bc8f6b4dd2d7a096efe29752d09fb6df76226c80283a55975a7751f"} Jan 29 17:28:20 crc kubenswrapper[4886]: I0129 17:28:20.577137 4886 generic.go:334] "Generic (PLEG): container finished" podID="35a75b14-10dc-482f-9b03-be71a8b0bfd4" containerID="642ddcf7d24f5ba4de7f4cfe5021d1c82bdda14e5ce39e790d42f342b92ed808" exitCode=0 Jan 29 17:28:20 crc kubenswrapper[4886]: I0129 17:28:20.577535 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ddvd" event={"ID":"35a75b14-10dc-482f-9b03-be71a8b0bfd4","Type":"ContainerDied","Data":"642ddcf7d24f5ba4de7f4cfe5021d1c82bdda14e5ce39e790d42f342b92ed808"} Jan 29 17:28:20 crc kubenswrapper[4886]: I0129 17:28:20.585965 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k4wq2" event={"ID":"05cce123-7c5e-4254-b4af-53d0a93b2087","Type":"ContainerStarted","Data":"3b75f4763fead7a66bbf159571598e4b767cc99f693fb214dbf9a681b5f9707f"} Jan 29 17:28:20 crc kubenswrapper[4886]: I0129 17:28:20.592476 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cvqft" event={"ID":"bd300ccf-3376-4861-bcae-bf7e7310ab20","Type":"ContainerStarted","Data":"4b5c3a0ee80d0412c7b91331bdd66750c33e8dcec79e419a2bfaa922a8aca1b3"} Jan 29 17:28:20 crc kubenswrapper[4886]: I0129 17:28:20.640005 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cvqft" podStartSLOduration=2.780366371 podStartE2EDuration="26.639982947s" podCreationTimestamp="2026-01-29 17:27:54 +0000 UTC" firstStartedPulling="2026-01-29 17:27:56.253144087 +0000 UTC m=+3959.161863359" lastFinishedPulling="2026-01-29 17:28:20.112760623 +0000 UTC m=+3983.021479935" observedRunningTime="2026-01-29 17:28:20.630438914 +0000 UTC m=+3983.539158226" watchObservedRunningTime="2026-01-29 17:28:20.639982947 +0000 UTC m=+3983.548702219" Jan 29 17:28:20 crc kubenswrapper[4886]: I0129 17:28:20.672093 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k4wq2" podStartSLOduration=3.103068192 podStartE2EDuration="7.672071616s" podCreationTimestamp="2026-01-29 17:28:13 +0000 UTC" firstStartedPulling="2026-01-29 17:28:14.461984219 +0000 UTC m=+3977.370703491" lastFinishedPulling="2026-01-29 17:28:19.030987633 +0000 UTC m=+3981.939706915" observedRunningTime="2026-01-29 17:28:20.651219309 +0000 UTC m=+3983.559938591" watchObservedRunningTime="2026-01-29 17:28:20.672071616 +0000 UTC m=+3983.580790888" Jan 29 17:28:22 crc kubenswrapper[4886]: I0129 17:28:22.630608 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ddvd" event={"ID":"35a75b14-10dc-482f-9b03-be71a8b0bfd4","Type":"ContainerStarted","Data":"eb2c0e2ba022ed5bbe2b78ba5d991d3803db9fdfe00af6d0c6e96716e4b2a750"} Jan 29 17:28:22 crc kubenswrapper[4886]: I0129 17:28:22.668393 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8ddvd" podStartSLOduration=3.716718622 
podStartE2EDuration="7.668367332s" podCreationTimestamp="2026-01-29 17:28:15 +0000 UTC" firstStartedPulling="2026-01-29 17:28:17.515946556 +0000 UTC m=+3980.424665838" lastFinishedPulling="2026-01-29 17:28:21.467595276 +0000 UTC m=+3984.376314548" observedRunningTime="2026-01-29 17:28:22.655058691 +0000 UTC m=+3985.563777963" watchObservedRunningTime="2026-01-29 17:28:22.668367332 +0000 UTC m=+3985.577086624" Jan 29 17:28:23 crc kubenswrapper[4886]: I0129 17:28:23.430605 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k4wq2" Jan 29 17:28:23 crc kubenswrapper[4886]: I0129 17:28:23.431820 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k4wq2" Jan 29 17:28:23 crc kubenswrapper[4886]: I0129 17:28:23.503412 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k4wq2" Jan 29 17:28:25 crc kubenswrapper[4886]: I0129 17:28:25.154491 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cvqft" Jan 29 17:28:25 crc kubenswrapper[4886]: I0129 17:28:25.154986 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cvqft" Jan 29 17:28:25 crc kubenswrapper[4886]: I0129 17:28:25.855287 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8ddvd" Jan 29 17:28:25 crc kubenswrapper[4886]: I0129 17:28:25.855361 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8ddvd" Jan 29 17:28:25 crc kubenswrapper[4886]: I0129 17:28:25.900002 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8ddvd" Jan 29 17:28:26 crc kubenswrapper[4886]: I0129 17:28:26.210575 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cvqft" podUID="bd300ccf-3376-4861-bcae-bf7e7310ab20" containerName="registry-server" probeResult="failure" output=< Jan 29 17:28:26 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Jan 29 17:28:26 crc kubenswrapper[4886]: > Jan 29 17:28:32 crc kubenswrapper[4886]: I0129 17:28:32.616228 4886 scope.go:117] "RemoveContainer" containerID="55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b" Jan 29 17:28:32 crc kubenswrapper[4886]: E0129 17:28:32.617305 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:28:33 crc kubenswrapper[4886]: I0129 17:28:33.729234 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k4wq2" Jan 29 17:28:33 crc kubenswrapper[4886]: I0129 17:28:33.790001 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k4wq2"] Jan 29 17:28:34 crc kubenswrapper[4886]: I0129 17:28:34.744496 4886 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-k4wq2" podUID="05cce123-7c5e-4254-b4af-53d0a93b2087" containerName="registry-server" containerID="cri-o://3b75f4763fead7a66bbf159571598e4b767cc99f693fb214dbf9a681b5f9707f" gracePeriod=2 Jan 29 17:28:35 crc kubenswrapper[4886]: I0129 17:28:35.583080 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k4wq2" Jan 29 17:28:35 crc kubenswrapper[4886]: I0129 17:28:35.614228 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96btq\" (UniqueName: \"kubernetes.io/projected/05cce123-7c5e-4254-b4af-53d0a93b2087-kube-api-access-96btq\") pod \"05cce123-7c5e-4254-b4af-53d0a93b2087\" (UID: \"05cce123-7c5e-4254-b4af-53d0a93b2087\") " Jan 29 17:28:35 crc kubenswrapper[4886]: I0129 17:28:35.614366 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05cce123-7c5e-4254-b4af-53d0a93b2087-catalog-content\") pod \"05cce123-7c5e-4254-b4af-53d0a93b2087\" (UID: \"05cce123-7c5e-4254-b4af-53d0a93b2087\") " Jan 29 17:28:35 crc kubenswrapper[4886]: I0129 17:28:35.614511 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05cce123-7c5e-4254-b4af-53d0a93b2087-utilities\") pod \"05cce123-7c5e-4254-b4af-53d0a93b2087\" (UID: \"05cce123-7c5e-4254-b4af-53d0a93b2087\") " Jan 29 17:28:35 crc kubenswrapper[4886]: I0129 17:28:35.615418 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05cce123-7c5e-4254-b4af-53d0a93b2087-utilities" (OuterVolumeSpecName: "utilities") pod "05cce123-7c5e-4254-b4af-53d0a93b2087" (UID: "05cce123-7c5e-4254-b4af-53d0a93b2087"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:28:35 crc kubenswrapper[4886]: I0129 17:28:35.616533 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05cce123-7c5e-4254-b4af-53d0a93b2087-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:28:35 crc kubenswrapper[4886]: I0129 17:28:35.641267 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05cce123-7c5e-4254-b4af-53d0a93b2087-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05cce123-7c5e-4254-b4af-53d0a93b2087" (UID: "05cce123-7c5e-4254-b4af-53d0a93b2087"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:28:35 crc kubenswrapper[4886]: I0129 17:28:35.651419 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05cce123-7c5e-4254-b4af-53d0a93b2087-kube-api-access-96btq" (OuterVolumeSpecName: "kube-api-access-96btq") pod "05cce123-7c5e-4254-b4af-53d0a93b2087" (UID: "05cce123-7c5e-4254-b4af-53d0a93b2087"). InnerVolumeSpecName "kube-api-access-96btq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:28:35 crc kubenswrapper[4886]: I0129 17:28:35.719251 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96btq\" (UniqueName: \"kubernetes.io/projected/05cce123-7c5e-4254-b4af-53d0a93b2087-kube-api-access-96btq\") on node \"crc\" DevicePath \"\"" Jan 29 17:28:35 crc kubenswrapper[4886]: I0129 17:28:35.719340 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05cce123-7c5e-4254-b4af-53d0a93b2087-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:28:35 crc kubenswrapper[4886]: I0129 17:28:35.755486 4886 generic.go:334] "Generic (PLEG): container finished" podID="05cce123-7c5e-4254-b4af-53d0a93b2087" containerID="3b75f4763fead7a66bbf159571598e4b767cc99f693fb214dbf9a681b5f9707f" exitCode=0 Jan 29 17:28:35 crc kubenswrapper[4886]: I0129 17:28:35.755533 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k4wq2" event={"ID":"05cce123-7c5e-4254-b4af-53d0a93b2087","Type":"ContainerDied","Data":"3b75f4763fead7a66bbf159571598e4b767cc99f693fb214dbf9a681b5f9707f"} Jan 29 17:28:35 crc kubenswrapper[4886]: I0129 17:28:35.755537 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k4wq2" Jan 29 17:28:35 crc kubenswrapper[4886]: I0129 17:28:35.755559 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k4wq2" event={"ID":"05cce123-7c5e-4254-b4af-53d0a93b2087","Type":"ContainerDied","Data":"61a0b584afbf6481ef8bc0dbb3bd55f9512c81522023b6b6c81f7237e81d868f"} Jan 29 17:28:35 crc kubenswrapper[4886]: I0129 17:28:35.755579 4886 scope.go:117] "RemoveContainer" containerID="3b75f4763fead7a66bbf159571598e4b767cc99f693fb214dbf9a681b5f9707f" Jan 29 17:28:35 crc kubenswrapper[4886]: I0129 17:28:35.779847 4886 scope.go:117] "RemoveContainer" containerID="253cc685146e10232d5ab9f70d1ede857a6248476c2faa279c39a0a3b167d394" Jan 29 17:28:35 crc kubenswrapper[4886]: I0129 17:28:35.803545 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k4wq2"] Jan 29 17:28:35 crc kubenswrapper[4886]: I0129 17:28:35.814862 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k4wq2"] Jan 29 17:28:35 crc kubenswrapper[4886]: I0129 17:28:35.821413 4886 scope.go:117] "RemoveContainer" containerID="842f59d01bbe3e85d057d0fd9d33f7e9337664f17faeae195f7a44ef00d411bf" Jan 29 17:28:35 crc kubenswrapper[4886]: I0129 17:28:35.852541 4886 scope.go:117] "RemoveContainer" containerID="3b75f4763fead7a66bbf159571598e4b767cc99f693fb214dbf9a681b5f9707f" Jan 29 17:28:35 crc kubenswrapper[4886]: E0129 17:28:35.853241 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b75f4763fead7a66bbf159571598e4b767cc99f693fb214dbf9a681b5f9707f\": container with ID starting with 3b75f4763fead7a66bbf159571598e4b767cc99f693fb214dbf9a681b5f9707f not found: ID does not exist" containerID="3b75f4763fead7a66bbf159571598e4b767cc99f693fb214dbf9a681b5f9707f" Jan 29 17:28:35 crc kubenswrapper[4886]: I0129 17:28:35.853277 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b75f4763fead7a66bbf159571598e4b767cc99f693fb214dbf9a681b5f9707f"} err="failed to get container status 
\"3b75f4763fead7a66bbf159571598e4b767cc99f693fb214dbf9a681b5f9707f\": rpc error: code = NotFound desc = could not find container \"3b75f4763fead7a66bbf159571598e4b767cc99f693fb214dbf9a681b5f9707f\": container with ID starting with 3b75f4763fead7a66bbf159571598e4b767cc99f693fb214dbf9a681b5f9707f not found: ID does not exist" Jan 29 17:28:35 crc kubenswrapper[4886]: I0129 17:28:35.853301 4886 scope.go:117] "RemoveContainer" containerID="253cc685146e10232d5ab9f70d1ede857a6248476c2faa279c39a0a3b167d394" Jan 29 17:28:35 crc kubenswrapper[4886]: E0129 17:28:35.853695 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"253cc685146e10232d5ab9f70d1ede857a6248476c2faa279c39a0a3b167d394\": container with ID starting with 253cc685146e10232d5ab9f70d1ede857a6248476c2faa279c39a0a3b167d394 not found: ID does not exist" containerID="253cc685146e10232d5ab9f70d1ede857a6248476c2faa279c39a0a3b167d394" Jan 29 17:28:35 crc kubenswrapper[4886]: I0129 17:28:35.853726 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"253cc685146e10232d5ab9f70d1ede857a6248476c2faa279c39a0a3b167d394"} err="failed to get container status \"253cc685146e10232d5ab9f70d1ede857a6248476c2faa279c39a0a3b167d394\": rpc error: code = NotFound desc = could not find container \"253cc685146e10232d5ab9f70d1ede857a6248476c2faa279c39a0a3b167d394\": container with ID starting with 253cc685146e10232d5ab9f70d1ede857a6248476c2faa279c39a0a3b167d394 not found: ID does not exist" Jan 29 17:28:35 crc kubenswrapper[4886]: I0129 17:28:35.853748 4886 scope.go:117] "RemoveContainer" containerID="842f59d01bbe3e85d057d0fd9d33f7e9337664f17faeae195f7a44ef00d411bf" Jan 29 17:28:35 crc kubenswrapper[4886]: E0129 17:28:35.854053 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"842f59d01bbe3e85d057d0fd9d33f7e9337664f17faeae195f7a44ef00d411bf\": container with ID starting with 842f59d01bbe3e85d057d0fd9d33f7e9337664f17faeae195f7a44ef00d411bf not found: ID does not exist" containerID="842f59d01bbe3e85d057d0fd9d33f7e9337664f17faeae195f7a44ef00d411bf" Jan 29 17:28:35 crc kubenswrapper[4886]: I0129 17:28:35.854081 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"842f59d01bbe3e85d057d0fd9d33f7e9337664f17faeae195f7a44ef00d411bf"} err="failed to get container status \"842f59d01bbe3e85d057d0fd9d33f7e9337664f17faeae195f7a44ef00d411bf\": rpc error: code = NotFound desc = could not find container \"842f59d01bbe3e85d057d0fd9d33f7e9337664f17faeae195f7a44ef00d411bf\": container with ID starting with 842f59d01bbe3e85d057d0fd9d33f7e9337664f17faeae195f7a44ef00d411bf not found: ID does not exist" Jan 29 17:28:35 crc kubenswrapper[4886]: I0129 17:28:35.921565 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8ddvd" Jan 29 17:28:36 crc kubenswrapper[4886]: I0129 17:28:36.207059 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cvqft" podUID="bd300ccf-3376-4861-bcae-bf7e7310ab20" containerName="registry-server" probeResult="failure" output=< Jan 29 17:28:36 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Jan 29 17:28:36 crc kubenswrapper[4886]: > Jan 29 17:28:36 crc kubenswrapper[4886]: I0129 17:28:36.630441 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="05cce123-7c5e-4254-b4af-53d0a93b2087" path="/var/lib/kubelet/pods/05cce123-7c5e-4254-b4af-53d0a93b2087/volumes" Jan 29 17:28:38 crc kubenswrapper[4886]: I0129 17:28:38.176805 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8ddvd"] Jan 29 17:28:38 crc kubenswrapper[4886]: I0129 17:28:38.177058 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8ddvd" podUID="35a75b14-10dc-482f-9b03-be71a8b0bfd4" containerName="registry-server" containerID="cri-o://eb2c0e2ba022ed5bbe2b78ba5d991d3803db9fdfe00af6d0c6e96716e4b2a750" gracePeriod=2 Jan 29 17:28:38 crc kubenswrapper[4886]: I0129 17:28:38.793553 4886 generic.go:334] "Generic (PLEG): container finished" podID="35a75b14-10dc-482f-9b03-be71a8b0bfd4" containerID="eb2c0e2ba022ed5bbe2b78ba5d991d3803db9fdfe00af6d0c6e96716e4b2a750" exitCode=0 Jan 29 17:28:38 crc kubenswrapper[4886]: I0129 17:28:38.793779 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ddvd" event={"ID":"35a75b14-10dc-482f-9b03-be71a8b0bfd4","Type":"ContainerDied","Data":"eb2c0e2ba022ed5bbe2b78ba5d991d3803db9fdfe00af6d0c6e96716e4b2a750"} Jan 29 17:28:38 crc kubenswrapper[4886]: I0129 17:28:38.794158 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ddvd" event={"ID":"35a75b14-10dc-482f-9b03-be71a8b0bfd4","Type":"ContainerDied","Data":"ae8c7924702c370ac40c8e4b953e0e47962503187d56a562a79b94f682fa85c7"} Jan 29 17:28:38 crc kubenswrapper[4886]: I0129 17:28:38.794180 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae8c7924702c370ac40c8e4b953e0e47962503187d56a562a79b94f682fa85c7" Jan 29 17:28:38 crc kubenswrapper[4886]: I0129 17:28:38.835490 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8ddvd" Jan 29 17:28:38 crc kubenswrapper[4886]: I0129 17:28:38.900179 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a75b14-10dc-482f-9b03-be71a8b0bfd4-utilities\") pod \"35a75b14-10dc-482f-9b03-be71a8b0bfd4\" (UID: \"35a75b14-10dc-482f-9b03-be71a8b0bfd4\") " Jan 29 17:28:38 crc kubenswrapper[4886]: I0129 17:28:38.900410 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5qz6\" (UniqueName: \"kubernetes.io/projected/35a75b14-10dc-482f-9b03-be71a8b0bfd4-kube-api-access-s5qz6\") pod \"35a75b14-10dc-482f-9b03-be71a8b0bfd4\" (UID: \"35a75b14-10dc-482f-9b03-be71a8b0bfd4\") " Jan 29 17:28:38 crc kubenswrapper[4886]: I0129 17:28:38.900586 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a75b14-10dc-482f-9b03-be71a8b0bfd4-catalog-content\") pod \"35a75b14-10dc-482f-9b03-be71a8b0bfd4\" (UID: \"35a75b14-10dc-482f-9b03-be71a8b0bfd4\") " Jan 29 17:28:38 crc kubenswrapper[4886]: I0129 17:28:38.901134 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35a75b14-10dc-482f-9b03-be71a8b0bfd4-utilities" (OuterVolumeSpecName: "utilities") pod "35a75b14-10dc-482f-9b03-be71a8b0bfd4" (UID: "35a75b14-10dc-482f-9b03-be71a8b0bfd4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:28:38 crc kubenswrapper[4886]: I0129 17:28:38.901801 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a75b14-10dc-482f-9b03-be71a8b0bfd4-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:28:38 crc kubenswrapper[4886]: I0129 17:28:38.909036 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35a75b14-10dc-482f-9b03-be71a8b0bfd4-kube-api-access-s5qz6" (OuterVolumeSpecName: "kube-api-access-s5qz6") pod "35a75b14-10dc-482f-9b03-be71a8b0bfd4" (UID: "35a75b14-10dc-482f-9b03-be71a8b0bfd4"). InnerVolumeSpecName "kube-api-access-s5qz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:28:38 crc kubenswrapper[4886]: I0129 17:28:38.966495 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35a75b14-10dc-482f-9b03-be71a8b0bfd4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35a75b14-10dc-482f-9b03-be71a8b0bfd4" (UID: "35a75b14-10dc-482f-9b03-be71a8b0bfd4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:28:39 crc kubenswrapper[4886]: I0129 17:28:39.003889 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5qz6\" (UniqueName: \"kubernetes.io/projected/35a75b14-10dc-482f-9b03-be71a8b0bfd4-kube-api-access-s5qz6\") on node \"crc\" DevicePath \"\"" Jan 29 17:28:39 crc kubenswrapper[4886]: I0129 17:28:39.003937 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a75b14-10dc-482f-9b03-be71a8b0bfd4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:28:39 crc kubenswrapper[4886]: I0129 17:28:39.802993 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8ddvd" Jan 29 17:28:39 crc kubenswrapper[4886]: I0129 17:28:39.842539 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8ddvd"] Jan 29 17:28:39 crc kubenswrapper[4886]: I0129 17:28:39.854416 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8ddvd"] Jan 29 17:28:40 crc kubenswrapper[4886]: E0129 17:28:40.001068 4886 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35a75b14_10dc_482f_9b03_be71a8b0bfd4.slice/crio-ae8c7924702c370ac40c8e4b953e0e47962503187d56a562a79b94f682fa85c7\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35a75b14_10dc_482f_9b03_be71a8b0bfd4.slice\": RecentStats: unable to find data in memory cache]" Jan 29 17:28:40 crc kubenswrapper[4886]: I0129 17:28:40.628134 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35a75b14-10dc-482f-9b03-be71a8b0bfd4" path="/var/lib/kubelet/pods/35a75b14-10dc-482f-9b03-be71a8b0bfd4/volumes" Jan 29 17:28:45 crc kubenswrapper[4886]: I0129 17:28:45.235128 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cvqft" Jan 29 17:28:45 crc kubenswrapper[4886]: I0129 17:28:45.288566 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cvqft" Jan 29 17:28:45 crc kubenswrapper[4886]: I0129 17:28:45.474218 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cvqft"] Jan 29 17:28:46 crc kubenswrapper[4886]: I0129 17:28:46.885807 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cvqft" podUID="bd300ccf-3376-4861-bcae-bf7e7310ab20" containerName="registry-server" containerID="cri-o://4b5c3a0ee80d0412c7b91331bdd66750c33e8dcec79e419a2bfaa922a8aca1b3" gracePeriod=2 Jan 29 17:28:47 crc kubenswrapper[4886]: I0129 17:28:47.550695 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cvqft" Jan 29 17:28:47 crc kubenswrapper[4886]: I0129 17:28:47.618372 4886 scope.go:117] "RemoveContainer" containerID="55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b" Jan 29 17:28:47 crc kubenswrapper[4886]: I0129 17:28:47.618513 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd300ccf-3376-4861-bcae-bf7e7310ab20-catalog-content\") pod \"bd300ccf-3376-4861-bcae-bf7e7310ab20\" (UID: \"bd300ccf-3376-4861-bcae-bf7e7310ab20\") " Jan 29 17:28:47 crc kubenswrapper[4886]: I0129 17:28:47.618730 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd300ccf-3376-4861-bcae-bf7e7310ab20-utilities\") pod \"bd300ccf-3376-4861-bcae-bf7e7310ab20\" (UID: \"bd300ccf-3376-4861-bcae-bf7e7310ab20\") " Jan 29 17:28:47 crc kubenswrapper[4886]: I0129 17:28:47.618771 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgnp6\" (UniqueName: \"kubernetes.io/projected/bd300ccf-3376-4861-bcae-bf7e7310ab20-kube-api-access-wgnp6\") pod \"bd300ccf-3376-4861-bcae-bf7e7310ab20\" (UID: \"bd300ccf-3376-4861-bcae-bf7e7310ab20\") " Jan 29 17:28:47 crc kubenswrapper[4886]: E0129 17:28:47.618858 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:28:47 crc kubenswrapper[4886]: I0129 17:28:47.619592 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd300ccf-3376-4861-bcae-bf7e7310ab20-utilities" (OuterVolumeSpecName: "utilities") pod "bd300ccf-3376-4861-bcae-bf7e7310ab20" (UID: "bd300ccf-3376-4861-bcae-bf7e7310ab20"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:28:47 crc kubenswrapper[4886]: I0129 17:28:47.619747 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd300ccf-3376-4861-bcae-bf7e7310ab20-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:28:47 crc kubenswrapper[4886]: I0129 17:28:47.627902 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd300ccf-3376-4861-bcae-bf7e7310ab20-kube-api-access-wgnp6" (OuterVolumeSpecName: "kube-api-access-wgnp6") pod "bd300ccf-3376-4861-bcae-bf7e7310ab20" (UID: "bd300ccf-3376-4861-bcae-bf7e7310ab20"). InnerVolumeSpecName "kube-api-access-wgnp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:28:47 crc kubenswrapper[4886]: I0129 17:28:47.722456 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgnp6\" (UniqueName: \"kubernetes.io/projected/bd300ccf-3376-4861-bcae-bf7e7310ab20-kube-api-access-wgnp6\") on node \"crc\" DevicePath \"\"" Jan 29 17:28:47 crc kubenswrapper[4886]: I0129 17:28:47.763028 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd300ccf-3376-4861-bcae-bf7e7310ab20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd300ccf-3376-4861-bcae-bf7e7310ab20" (UID: "bd300ccf-3376-4861-bcae-bf7e7310ab20"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:28:47 crc kubenswrapper[4886]: I0129 17:28:47.824766 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd300ccf-3376-4861-bcae-bf7e7310ab20-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:28:47 crc kubenswrapper[4886]: I0129 17:28:47.897770 4886 generic.go:334] "Generic (PLEG): container finished" podID="bd300ccf-3376-4861-bcae-bf7e7310ab20" containerID="4b5c3a0ee80d0412c7b91331bdd66750c33e8dcec79e419a2bfaa922a8aca1b3" exitCode=0 Jan 29 17:28:47 crc kubenswrapper[4886]: I0129 17:28:47.897810 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cvqft" event={"ID":"bd300ccf-3376-4861-bcae-bf7e7310ab20","Type":"ContainerDied","Data":"4b5c3a0ee80d0412c7b91331bdd66750c33e8dcec79e419a2bfaa922a8aca1b3"} Jan 29 17:28:47 crc kubenswrapper[4886]: I0129 17:28:47.897838 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cvqft" event={"ID":"bd300ccf-3376-4861-bcae-bf7e7310ab20","Type":"ContainerDied","Data":"ac306b50644c0ba93e28d27ffe560102b7472fe91bb542b8c5a074fde1b9d833"} Jan 29 17:28:47 crc kubenswrapper[4886]: I0129 17:28:47.897840 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cvqft" Jan 29 17:28:47 crc kubenswrapper[4886]: I0129 17:28:47.897854 4886 scope.go:117] "RemoveContainer" containerID="4b5c3a0ee80d0412c7b91331bdd66750c33e8dcec79e419a2bfaa922a8aca1b3" Jan 29 17:28:47 crc kubenswrapper[4886]: I0129 17:28:47.919697 4886 scope.go:117] "RemoveContainer" containerID="256d215b6bc8f6b4dd2d7a096efe29752d09fb6df76226c80283a55975a7751f" Jan 29 17:28:47 crc kubenswrapper[4886]: I0129 17:28:47.933260 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cvqft"] Jan 29 17:28:47 crc kubenswrapper[4886]: I0129 17:28:47.947988 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cvqft"] Jan 29 17:28:47 crc kubenswrapper[4886]: I0129 17:28:47.954461 4886 scope.go:117] "RemoveContainer" containerID="2946b5a7224cce9e100a708a0973e21f7be5d0a36fb81ad34a298fee9b955dad" Jan 29 17:28:47 crc kubenswrapper[4886]: I0129 17:28:47.999448 4886 scope.go:117] "RemoveContainer" containerID="4b5c3a0ee80d0412c7b91331bdd66750c33e8dcec79e419a2bfaa922a8aca1b3" Jan 29 17:28:47 crc kubenswrapper[4886]: E0129 17:28:47.999920 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b5c3a0ee80d0412c7b91331bdd66750c33e8dcec79e419a2bfaa922a8aca1b3\": container with ID starting with 4b5c3a0ee80d0412c7b91331bdd66750c33e8dcec79e419a2bfaa922a8aca1b3 not found: ID does not exist" containerID="4b5c3a0ee80d0412c7b91331bdd66750c33e8dcec79e419a2bfaa922a8aca1b3" Jan 29 17:28:48 crc kubenswrapper[4886]: I0129 17:28:47.999966 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b5c3a0ee80d0412c7b91331bdd66750c33e8dcec79e419a2bfaa922a8aca1b3"} err="failed to get container status \"4b5c3a0ee80d0412c7b91331bdd66750c33e8dcec79e419a2bfaa922a8aca1b3\": rpc error: code = NotFound desc = could not find container \"4b5c3a0ee80d0412c7b91331bdd66750c33e8dcec79e419a2bfaa922a8aca1b3\": container with ID starting with 4b5c3a0ee80d0412c7b91331bdd66750c33e8dcec79e419a2bfaa922a8aca1b3 not found: ID does not exist" Jan 29 17:28:48 crc kubenswrapper[4886]: I0129 17:28:47.999987 4886 scope.go:117] "RemoveContainer" containerID="256d215b6bc8f6b4dd2d7a096efe29752d09fb6df76226c80283a55975a7751f" Jan 29 17:28:48 crc kubenswrapper[4886]: E0129 17:28:48.000819 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"256d215b6bc8f6b4dd2d7a096efe29752d09fb6df76226c80283a55975a7751f\": container with ID starting with 256d215b6bc8f6b4dd2d7a096efe29752d09fb6df76226c80283a55975a7751f not found: ID does not exist" containerID="256d215b6bc8f6b4dd2d7a096efe29752d09fb6df76226c80283a55975a7751f" Jan 29 17:28:48 crc kubenswrapper[4886]: I0129 17:28:48.001047 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"256d215b6bc8f6b4dd2d7a096efe29752d09fb6df76226c80283a55975a7751f"} err="failed to get container status \"256d215b6bc8f6b4dd2d7a096efe29752d09fb6df76226c80283a55975a7751f\": rpc error: code = NotFound desc = could not find container \"256d215b6bc8f6b4dd2d7a096efe29752d09fb6df76226c80283a55975a7751f\": container with ID starting with 256d215b6bc8f6b4dd2d7a096efe29752d09fb6df76226c80283a55975a7751f not found: ID does not exist" Jan 29 17:28:48 crc kubenswrapper[4886]: I0129 17:28:48.001089 4886 scope.go:117] "RemoveContainer" 
containerID="2946b5a7224cce9e100a708a0973e21f7be5d0a36fb81ad34a298fee9b955dad" Jan 29 17:28:48 crc kubenswrapper[4886]: E0129 17:28:48.001679 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2946b5a7224cce9e100a708a0973e21f7be5d0a36fb81ad34a298fee9b955dad\": container with ID starting with 2946b5a7224cce9e100a708a0973e21f7be5d0a36fb81ad34a298fee9b955dad not found: ID does not exist" containerID="2946b5a7224cce9e100a708a0973e21f7be5d0a36fb81ad34a298fee9b955dad" Jan 29 17:28:48 crc kubenswrapper[4886]: I0129 17:28:48.001707 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2946b5a7224cce9e100a708a0973e21f7be5d0a36fb81ad34a298fee9b955dad"} err="failed to get container status \"2946b5a7224cce9e100a708a0973e21f7be5d0a36fb81ad34a298fee9b955dad\": rpc error: code = NotFound desc = could not find container \"2946b5a7224cce9e100a708a0973e21f7be5d0a36fb81ad34a298fee9b955dad\": container with ID starting with 2946b5a7224cce9e100a708a0973e21f7be5d0a36fb81ad34a298fee9b955dad not found: ID does not exist" Jan 29 17:28:48 crc kubenswrapper[4886]: I0129 17:28:48.630881 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd300ccf-3376-4861-bcae-bf7e7310ab20" path="/var/lib/kubelet/pods/bd300ccf-3376-4861-bcae-bf7e7310ab20/volumes" Jan 29 17:29:00 crc kubenswrapper[4886]: I0129 17:29:00.616229 4886 scope.go:117] "RemoveContainer" containerID="55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b" Jan 29 17:29:00 crc kubenswrapper[4886]: E0129 17:29:00.617040 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:29:15 crc kubenswrapper[4886]: I0129 17:29:15.615886 4886 scope.go:117] "RemoveContainer" containerID="55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b" Jan 29 17:29:15 crc kubenswrapper[4886]: E0129 17:29:15.616731 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:29:27 crc kubenswrapper[4886]: I0129 17:29:27.615667 4886 scope.go:117] "RemoveContainer" containerID="55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b" Jan 29 17:29:27 crc kubenswrapper[4886]: E0129 17:29:27.616608 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:29:39 crc kubenswrapper[4886]: I0129 17:29:39.614747 4886 scope.go:117] "RemoveContainer" 
containerID="55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b" Jan 29 17:29:40 crc kubenswrapper[4886]: I0129 17:29:40.543355 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerStarted","Data":"35b339594f7204cb48b198eeee2a9559b017a0c55878601a4de933a78b8a5a91"} Jan 29 17:30:00 crc kubenswrapper[4886]: I0129 17:30:00.198821 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495130-cdv55"] Jan 29 17:30:00 crc kubenswrapper[4886]: E0129 17:30:00.200002 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05cce123-7c5e-4254-b4af-53d0a93b2087" containerName="extract-content" Jan 29 17:30:00 crc kubenswrapper[4886]: I0129 17:30:00.200019 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="05cce123-7c5e-4254-b4af-53d0a93b2087" containerName="extract-content" Jan 29 17:30:00 crc kubenswrapper[4886]: E0129 17:30:00.200042 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05cce123-7c5e-4254-b4af-53d0a93b2087" containerName="extract-utilities" Jan 29 17:30:00 crc kubenswrapper[4886]: I0129 17:30:00.200050 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="05cce123-7c5e-4254-b4af-53d0a93b2087" containerName="extract-utilities" Jan 29 17:30:00 crc kubenswrapper[4886]: E0129 17:30:00.200064 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd300ccf-3376-4861-bcae-bf7e7310ab20" containerName="extract-utilities" Jan 29 17:30:00 crc kubenswrapper[4886]: I0129 17:30:00.200073 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd300ccf-3376-4861-bcae-bf7e7310ab20" containerName="extract-utilities" Jan 29 17:30:00 crc kubenswrapper[4886]: E0129 17:30:00.200092 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd300ccf-3376-4861-bcae-bf7e7310ab20" containerName="extract-content" Jan 29 17:30:00 crc kubenswrapper[4886]: I0129 17:30:00.200099 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd300ccf-3376-4861-bcae-bf7e7310ab20" containerName="extract-content" Jan 29 17:30:00 crc kubenswrapper[4886]: E0129 17:30:00.200118 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a75b14-10dc-482f-9b03-be71a8b0bfd4" containerName="extract-content" Jan 29 17:30:00 crc kubenswrapper[4886]: I0129 17:30:00.200126 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a75b14-10dc-482f-9b03-be71a8b0bfd4" containerName="extract-content" Jan 29 17:30:00 crc kubenswrapper[4886]: E0129 17:30:00.200141 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a75b14-10dc-482f-9b03-be71a8b0bfd4" containerName="extract-utilities" Jan 29 17:30:00 crc kubenswrapper[4886]: I0129 17:30:00.200148 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a75b14-10dc-482f-9b03-be71a8b0bfd4" containerName="extract-utilities" Jan 29 17:30:00 crc kubenswrapper[4886]: E0129 17:30:00.200166 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd300ccf-3376-4861-bcae-bf7e7310ab20" containerName="registry-server" Jan 29 17:30:00 crc kubenswrapper[4886]: I0129 17:30:00.200173 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd300ccf-3376-4861-bcae-bf7e7310ab20" containerName="registry-server" Jan 29 17:30:00 crc kubenswrapper[4886]: E0129 17:30:00.200188 4886 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="05cce123-7c5e-4254-b4af-53d0a93b2087" containerName="registry-server" Jan 29 17:30:00 crc kubenswrapper[4886]: I0129 17:30:00.200195 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="05cce123-7c5e-4254-b4af-53d0a93b2087" containerName="registry-server" Jan 29 17:30:00 crc kubenswrapper[4886]: E0129 17:30:00.200213 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a75b14-10dc-482f-9b03-be71a8b0bfd4" containerName="registry-server" Jan 29 17:30:00 crc kubenswrapper[4886]: I0129 17:30:00.200219 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a75b14-10dc-482f-9b03-be71a8b0bfd4" containerName="registry-server" Jan 29 17:30:00 crc kubenswrapper[4886]: I0129 17:30:00.200486 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="05cce123-7c5e-4254-b4af-53d0a93b2087" containerName="registry-server" Jan 29 17:30:00 crc kubenswrapper[4886]: I0129 17:30:00.200518 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd300ccf-3376-4861-bcae-bf7e7310ab20" containerName="registry-server" Jan 29 17:30:00 crc kubenswrapper[4886]: I0129 17:30:00.200550 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="35a75b14-10dc-482f-9b03-be71a8b0bfd4" containerName="registry-server" Jan 29 17:30:00 crc kubenswrapper[4886]: I0129 17:30:00.201534 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-cdv55" Jan 29 17:30:00 crc kubenswrapper[4886]: I0129 17:30:00.211476 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 17:30:00 crc kubenswrapper[4886]: I0129 17:30:00.212702 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 17:30:00 crc kubenswrapper[4886]: I0129 17:30:00.214123 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495130-cdv55"] Jan 29 17:30:00 crc kubenswrapper[4886]: I0129 17:30:00.301934 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281-secret-volume\") pod \"collect-profiles-29495130-cdv55\" (UID: \"d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-cdv55" Jan 29 17:30:00 crc kubenswrapper[4886]: I0129 17:30:00.302118 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq48m\" (UniqueName: \"kubernetes.io/projected/d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281-kube-api-access-nq48m\") pod \"collect-profiles-29495130-cdv55\" (UID: \"d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-cdv55" Jan 29 17:30:00 crc kubenswrapper[4886]: I0129 17:30:00.302239 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281-config-volume\") pod \"collect-profiles-29495130-cdv55\" (UID: \"d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-cdv55" Jan 29 17:30:00 crc kubenswrapper[4886]: I0129 17:30:00.404944 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281-secret-volume\") pod \"collect-profiles-29495130-cdv55\" (UID: \"d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-cdv55" Jan 29 17:30:00 crc kubenswrapper[4886]: I0129 17:30:00.405072 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq48m\" (UniqueName: \"kubernetes.io/projected/d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281-kube-api-access-nq48m\") pod \"collect-profiles-29495130-cdv55\" (UID: \"d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-cdv55" Jan 29 17:30:00 crc kubenswrapper[4886]: I0129 17:30:00.405227 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281-config-volume\") pod \"collect-profiles-29495130-cdv55\" (UID: \"d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-cdv55" Jan 29 17:30:00 crc kubenswrapper[4886]: I0129 17:30:00.406481 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281-config-volume\") pod \"collect-profiles-29495130-cdv55\" (UID: \"d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-cdv55" Jan 29 17:30:00 crc kubenswrapper[4886]: I0129 17:30:00.413764 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281-secret-volume\") pod \"collect-profiles-29495130-cdv55\" (UID: \"d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-cdv55" Jan 29 17:30:00 crc kubenswrapper[4886]: I0129 17:30:00.424872 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq48m\" (UniqueName: \"kubernetes.io/projected/d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281-kube-api-access-nq48m\") pod \"collect-profiles-29495130-cdv55\" (UID: \"d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-cdv55" Jan 29 17:30:00 crc kubenswrapper[4886]: I0129 17:30:00.544389 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-cdv55" Jan 29 17:30:01 crc kubenswrapper[4886]: I0129 17:30:01.093192 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495130-cdv55"] Jan 29 17:30:01 crc kubenswrapper[4886]: I0129 17:30:01.810403 4886 generic.go:334] "Generic (PLEG): container finished" podID="d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281" containerID="e970dea6a6e8251fa9ff24484a3f5ffaee4ce0d2fad251a5d786e848db7373be" exitCode=0 Jan 29 17:30:01 crc kubenswrapper[4886]: I0129 17:30:01.810449 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-cdv55" event={"ID":"d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281","Type":"ContainerDied","Data":"e970dea6a6e8251fa9ff24484a3f5ffaee4ce0d2fad251a5d786e848db7373be"} Jan 29 17:30:01 crc kubenswrapper[4886]: I0129 17:30:01.810790 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-cdv55" event={"ID":"d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281","Type":"ContainerStarted","Data":"9c259f168053361e77ed1c13731247842c1d5e0113f74a4d43cf2793fdb0de05"} Jan 29 17:30:03 crc kubenswrapper[4886]: I0129 17:30:03.296980 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-cdv55" Jan 29 17:30:03 crc kubenswrapper[4886]: I0129 17:30:03.425455 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281-config-volume\") pod \"d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281\" (UID: \"d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281\") " Jan 29 17:30:03 crc kubenswrapper[4886]: I0129 17:30:03.425725 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq48m\" (UniqueName: \"kubernetes.io/projected/d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281-kube-api-access-nq48m\") pod \"d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281\" (UID: \"d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281\") " Jan 29 17:30:03 crc kubenswrapper[4886]: I0129 17:30:03.425856 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281-secret-volume\") pod \"d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281\" (UID: \"d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281\") " Jan 29 17:30:03 crc kubenswrapper[4886]: I0129 17:30:03.426923 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281-config-volume" (OuterVolumeSpecName: "config-volume") pod "d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281" (UID: "d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:30:03 crc kubenswrapper[4886]: I0129 17:30:03.431995 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281" (UID: "d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:30:03 crc kubenswrapper[4886]: I0129 17:30:03.432608 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281-kube-api-access-nq48m" (OuterVolumeSpecName: "kube-api-access-nq48m") pod "d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281" (UID: "d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281"). InnerVolumeSpecName "kube-api-access-nq48m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:30:03 crc kubenswrapper[4886]: I0129 17:30:03.528684 4886 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 17:30:03 crc kubenswrapper[4886]: I0129 17:30:03.528732 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq48m\" (UniqueName: \"kubernetes.io/projected/d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281-kube-api-access-nq48m\") on node \"crc\" DevicePath \"\"" Jan 29 17:30:03 crc kubenswrapper[4886]: I0129 17:30:03.528748 4886 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 17:30:03 crc kubenswrapper[4886]: I0129 17:30:03.840183 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-cdv55" event={"ID":"d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281","Type":"ContainerDied","Data":"9c259f168053361e77ed1c13731247842c1d5e0113f74a4d43cf2793fdb0de05"} Jan 29 17:30:03 crc kubenswrapper[4886]: I0129 17:30:03.840228 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c259f168053361e77ed1c13731247842c1d5e0113f74a4d43cf2793fdb0de05" Jan 29 17:30:03 crc kubenswrapper[4886]: I0129 17:30:03.840300 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-cdv55" Jan 29 17:30:04 crc kubenswrapper[4886]: I0129 17:30:04.416346 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495085-rzdqr"] Jan 29 17:30:04 crc kubenswrapper[4886]: I0129 17:30:04.429511 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495085-rzdqr"] Jan 29 17:30:04 crc kubenswrapper[4886]: I0129 17:30:04.640934 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a04871a-41ba-40fc-bfb0-ca8f308e9b01" path="/var/lib/kubelet/pods/0a04871a-41ba-40fc-bfb0-ca8f308e9b01/volumes" Jan 29 17:30:55 crc kubenswrapper[4886]: I0129 17:30:55.056388 4886 scope.go:117] "RemoveContainer" containerID="11c1455f9476b08d8f802dd75f2ecc6d25f6377ab593571ce7bee30aa00fa339" Jan 29 17:31:56 crc kubenswrapper[4886]: I0129 17:31:56.119412 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vs8cn"] Jan 29 17:31:56 crc kubenswrapper[4886]: E0129 17:31:56.120667 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281" containerName="collect-profiles" Jan 29 17:31:56 crc kubenswrapper[4886]: I0129 17:31:56.120682 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281" containerName="collect-profiles" Jan 29 17:31:56 crc kubenswrapper[4886]: I0129 17:31:56.120998 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281" containerName="collect-profiles" Jan 29 17:31:56 crc kubenswrapper[4886]: I0129 17:31:56.123259 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vs8cn" Jan 29 17:31:56 crc kubenswrapper[4886]: I0129 17:31:56.137188 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vs8cn"] Jan 29 17:31:56 crc kubenswrapper[4886]: I0129 17:31:56.234558 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b15eadb0-03e5-432e-a2e4-3366698223ab-catalog-content\") pod \"community-operators-vs8cn\" (UID: \"b15eadb0-03e5-432e-a2e4-3366698223ab\") " pod="openshift-marketplace/community-operators-vs8cn" Jan 29 17:31:56 crc kubenswrapper[4886]: I0129 17:31:56.234747 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b15eadb0-03e5-432e-a2e4-3366698223ab-utilities\") pod \"community-operators-vs8cn\" (UID: \"b15eadb0-03e5-432e-a2e4-3366698223ab\") " pod="openshift-marketplace/community-operators-vs8cn" Jan 29 17:31:56 crc kubenswrapper[4886]: I0129 17:31:56.234847 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flvwr\" (UniqueName: \"kubernetes.io/projected/b15eadb0-03e5-432e-a2e4-3366698223ab-kube-api-access-flvwr\") pod \"community-operators-vs8cn\" (UID: \"b15eadb0-03e5-432e-a2e4-3366698223ab\") " pod="openshift-marketplace/community-operators-vs8cn" Jan 29 17:31:56 crc kubenswrapper[4886]: I0129 17:31:56.337368 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b15eadb0-03e5-432e-a2e4-3366698223ab-utilities\") pod \"community-operators-vs8cn\" (UID: \"b15eadb0-03e5-432e-a2e4-3366698223ab\") " pod="openshift-marketplace/community-operators-vs8cn" Jan 29 17:31:56 crc kubenswrapper[4886]: I0129 17:31:56.337488 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flvwr\" (UniqueName: \"kubernetes.io/projected/b15eadb0-03e5-432e-a2e4-3366698223ab-kube-api-access-flvwr\") pod \"community-operators-vs8cn\" (UID: \"b15eadb0-03e5-432e-a2e4-3366698223ab\") " pod="openshift-marketplace/community-operators-vs8cn" Jan 29 17:31:56 crc kubenswrapper[4886]: I0129 17:31:56.337593 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b15eadb0-03e5-432e-a2e4-3366698223ab-catalog-content\") pod \"community-operators-vs8cn\" (UID: \"b15eadb0-03e5-432e-a2e4-3366698223ab\") " pod="openshift-marketplace/community-operators-vs8cn" Jan 29 17:31:56 crc kubenswrapper[4886]: I0129 17:31:56.337938 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b15eadb0-03e5-432e-a2e4-3366698223ab-utilities\") pod \"community-operators-vs8cn\" (UID: \"b15eadb0-03e5-432e-a2e4-3366698223ab\") " pod="openshift-marketplace/community-operators-vs8cn" Jan 29 17:31:56 crc kubenswrapper[4886]: I0129 17:31:56.338027 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b15eadb0-03e5-432e-a2e4-3366698223ab-catalog-content\") pod \"community-operators-vs8cn\" (UID: \"b15eadb0-03e5-432e-a2e4-3366698223ab\") " pod="openshift-marketplace/community-operators-vs8cn" Jan 29 17:31:56 crc kubenswrapper[4886]: I0129 17:31:56.371024 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-flvwr\" (UniqueName: \"kubernetes.io/projected/b15eadb0-03e5-432e-a2e4-3366698223ab-kube-api-access-flvwr\") pod \"community-operators-vs8cn\" (UID: \"b15eadb0-03e5-432e-a2e4-3366698223ab\") " pod="openshift-marketplace/community-operators-vs8cn" Jan 29 17:31:56 crc kubenswrapper[4886]: I0129 17:31:56.442814 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vs8cn" Jan 29 17:31:57 crc kubenswrapper[4886]: I0129 17:31:57.053729 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vs8cn"] Jan 29 17:31:57 crc kubenswrapper[4886]: I0129 17:31:57.492605 4886 generic.go:334] "Generic (PLEG): container finished" podID="b15eadb0-03e5-432e-a2e4-3366698223ab" containerID="f4c918fc85d3407db2aee5b3bc2331867912c942c0d4a6509d35d2d9bbbd8082" exitCode=0 Jan 29 17:31:57 crc kubenswrapper[4886]: I0129 17:31:57.492665 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vs8cn" event={"ID":"b15eadb0-03e5-432e-a2e4-3366698223ab","Type":"ContainerDied","Data":"f4c918fc85d3407db2aee5b3bc2331867912c942c0d4a6509d35d2d9bbbd8082"} Jan 29 17:31:57 crc kubenswrapper[4886]: I0129 17:31:57.492717 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vs8cn" event={"ID":"b15eadb0-03e5-432e-a2e4-3366698223ab","Type":"ContainerStarted","Data":"b4bf1e94f7b9b806b205169e73b11e9ec0ab7cca4c63148827d709cccafc8c46"} Jan 29 17:31:58 crc kubenswrapper[4886]: I0129 17:31:58.505644 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vs8cn" event={"ID":"b15eadb0-03e5-432e-a2e4-3366698223ab","Type":"ContainerStarted","Data":"eb4c3a9407b847e7eb345a3881082eb392d458c7757612afe4499b64754972a3"} Jan 29 17:31:59 crc kubenswrapper[4886]: I0129 17:31:59.660975 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:31:59 crc kubenswrapper[4886]: I0129 17:31:59.661406 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:32:00 crc kubenswrapper[4886]: I0129 17:32:00.534419 4886 generic.go:334] "Generic (PLEG): container finished" podID="b15eadb0-03e5-432e-a2e4-3366698223ab" containerID="eb4c3a9407b847e7eb345a3881082eb392d458c7757612afe4499b64754972a3" exitCode=0 Jan 29 17:32:00 crc kubenswrapper[4886]: I0129 17:32:00.534618 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vs8cn" event={"ID":"b15eadb0-03e5-432e-a2e4-3366698223ab","Type":"ContainerDied","Data":"eb4c3a9407b847e7eb345a3881082eb392d458c7757612afe4499b64754972a3"} Jan 29 17:32:01 crc kubenswrapper[4886]: I0129 17:32:01.548095 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vs8cn" event={"ID":"b15eadb0-03e5-432e-a2e4-3366698223ab","Type":"ContainerStarted","Data":"0df23cbc18a986a7161c7c8329ec1af48c1a7572d4c3627fe49cd88cdfaa2f8e"} Jan 29 
17:32:01 crc kubenswrapper[4886]: I0129 17:32:01.579943 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vs8cn" podStartSLOduration=1.953942748 podStartE2EDuration="5.579918453s" podCreationTimestamp="2026-01-29 17:31:56 +0000 UTC" firstStartedPulling="2026-01-29 17:31:57.494981886 +0000 UTC m=+4200.403701158" lastFinishedPulling="2026-01-29 17:32:01.120957561 +0000 UTC m=+4204.029676863" observedRunningTime="2026-01-29 17:32:01.568767286 +0000 UTC m=+4204.477486598" watchObservedRunningTime="2026-01-29 17:32:01.579918453 +0000 UTC m=+4204.488637735" Jan 29 17:32:06 crc kubenswrapper[4886]: I0129 17:32:06.443847 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vs8cn" Jan 29 17:32:06 crc kubenswrapper[4886]: I0129 17:32:06.444454 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vs8cn" Jan 29 17:32:06 crc kubenswrapper[4886]: I0129 17:32:06.518040 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vs8cn" Jan 29 17:32:06 crc kubenswrapper[4886]: I0129 17:32:06.650154 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vs8cn" Jan 29 17:32:06 crc kubenswrapper[4886]: I0129 17:32:06.755281 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vs8cn"] Jan 29 17:32:08 crc kubenswrapper[4886]: I0129 17:32:08.630436 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vs8cn" podUID="b15eadb0-03e5-432e-a2e4-3366698223ab" containerName="registry-server" containerID="cri-o://0df23cbc18a986a7161c7c8329ec1af48c1a7572d4c3627fe49cd88cdfaa2f8e" gracePeriod=2 Jan 29 17:32:09 crc kubenswrapper[4886]: I0129 17:32:09.225870 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vs8cn" Jan 29 17:32:09 crc kubenswrapper[4886]: I0129 17:32:09.289570 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b15eadb0-03e5-432e-a2e4-3366698223ab-catalog-content\") pod \"b15eadb0-03e5-432e-a2e4-3366698223ab\" (UID: \"b15eadb0-03e5-432e-a2e4-3366698223ab\") " Jan 29 17:32:09 crc kubenswrapper[4886]: I0129 17:32:09.289858 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b15eadb0-03e5-432e-a2e4-3366698223ab-utilities\") pod \"b15eadb0-03e5-432e-a2e4-3366698223ab\" (UID: \"b15eadb0-03e5-432e-a2e4-3366698223ab\") " Jan 29 17:32:09 crc kubenswrapper[4886]: I0129 17:32:09.290023 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flvwr\" (UniqueName: \"kubernetes.io/projected/b15eadb0-03e5-432e-a2e4-3366698223ab-kube-api-access-flvwr\") pod \"b15eadb0-03e5-432e-a2e4-3366698223ab\" (UID: \"b15eadb0-03e5-432e-a2e4-3366698223ab\") " Jan 29 17:32:09 crc kubenswrapper[4886]: I0129 17:32:09.290663 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b15eadb0-03e5-432e-a2e4-3366698223ab-utilities" (OuterVolumeSpecName: "utilities") pod "b15eadb0-03e5-432e-a2e4-3366698223ab" (UID: "b15eadb0-03e5-432e-a2e4-3366698223ab"). 
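[Annotation] The pod_startup_latency_tracker entry above is internally consistent: podStartSLOduration is the end-to-end startup time minus the image-pull window, computed on the monotonic clock (the m=+… offsets). Reproducing the arithmetic from the logged values:

```python
# Values copied from the "Observed pod startup duration" entry above
# (monotonic m=+... offsets, in seconds).
first_started_pulling = 4200.403701158
last_finished_pulling = 4204.029676863
e2e = 5.579918453  # podStartE2EDuration

pull = last_finished_pulling - first_started_pulling
slo = e2e - pull
print(f"pull={pull:.9f}s slo={slo:.9f}s")  # pull=3.625975705s slo=1.953942748s
```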
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:32:09 crc kubenswrapper[4886]: I0129 17:32:09.291234 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b15eadb0-03e5-432e-a2e4-3366698223ab-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:32:09 crc kubenswrapper[4886]: I0129 17:32:09.297115 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b15eadb0-03e5-432e-a2e4-3366698223ab-kube-api-access-flvwr" (OuterVolumeSpecName: "kube-api-access-flvwr") pod "b15eadb0-03e5-432e-a2e4-3366698223ab" (UID: "b15eadb0-03e5-432e-a2e4-3366698223ab"). InnerVolumeSpecName "kube-api-access-flvwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:32:09 crc kubenswrapper[4886]: I0129 17:32:09.346323 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b15eadb0-03e5-432e-a2e4-3366698223ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b15eadb0-03e5-432e-a2e4-3366698223ab" (UID: "b15eadb0-03e5-432e-a2e4-3366698223ab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:32:09 crc kubenswrapper[4886]: I0129 17:32:09.393506 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b15eadb0-03e5-432e-a2e4-3366698223ab-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:32:09 crc kubenswrapper[4886]: I0129 17:32:09.393541 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flvwr\" (UniqueName: \"kubernetes.io/projected/b15eadb0-03e5-432e-a2e4-3366698223ab-kube-api-access-flvwr\") on node \"crc\" DevicePath \"\"" Jan 29 17:32:09 crc kubenswrapper[4886]: I0129 17:32:09.641392 4886 generic.go:334] "Generic (PLEG): container finished" podID="b15eadb0-03e5-432e-a2e4-3366698223ab" containerID="0df23cbc18a986a7161c7c8329ec1af48c1a7572d4c3627fe49cd88cdfaa2f8e" exitCode=0 Jan 29 17:32:09 crc kubenswrapper[4886]: I0129 17:32:09.641481 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vs8cn" Jan 29 17:32:09 crc kubenswrapper[4886]: I0129 17:32:09.641491 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vs8cn" event={"ID":"b15eadb0-03e5-432e-a2e4-3366698223ab","Type":"ContainerDied","Data":"0df23cbc18a986a7161c7c8329ec1af48c1a7572d4c3627fe49cd88cdfaa2f8e"} Jan 29 17:32:09 crc kubenswrapper[4886]: I0129 17:32:09.641731 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vs8cn" event={"ID":"b15eadb0-03e5-432e-a2e4-3366698223ab","Type":"ContainerDied","Data":"b4bf1e94f7b9b806b205169e73b11e9ec0ab7cca4c63148827d709cccafc8c46"} Jan 29 17:32:09 crc kubenswrapper[4886]: I0129 17:32:09.641752 4886 scope.go:117] "RemoveContainer" containerID="0df23cbc18a986a7161c7c8329ec1af48c1a7572d4c3627fe49cd88cdfaa2f8e" Jan 29 17:32:09 crc kubenswrapper[4886]: I0129 17:32:09.688972 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vs8cn"] Jan 29 17:32:09 crc kubenswrapper[4886]: I0129 17:32:09.698765 4886 scope.go:117] "RemoveContainer" containerID="eb4c3a9407b847e7eb345a3881082eb392d458c7757612afe4499b64754972a3" Jan 29 17:32:09 crc kubenswrapper[4886]: I0129 17:32:09.719514 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vs8cn"] Jan 29 17:32:09 crc kubenswrapper[4886]: I0129 17:32:09.731579 4886 scope.go:117] "RemoveContainer" containerID="f4c918fc85d3407db2aee5b3bc2331867912c942c0d4a6509d35d2d9bbbd8082" Jan 29 17:32:09 crc kubenswrapper[4886]: I0129 17:32:09.854792 4886 scope.go:117] "RemoveContainer" containerID="0df23cbc18a986a7161c7c8329ec1af48c1a7572d4c3627fe49cd88cdfaa2f8e" Jan 29 17:32:09 crc kubenswrapper[4886]: E0129 17:32:09.855146 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0df23cbc18a986a7161c7c8329ec1af48c1a7572d4c3627fe49cd88cdfaa2f8e\": container with ID starting with 0df23cbc18a986a7161c7c8329ec1af48c1a7572d4c3627fe49cd88cdfaa2f8e not found: ID does not exist" containerID="0df23cbc18a986a7161c7c8329ec1af48c1a7572d4c3627fe49cd88cdfaa2f8e" Jan 29 17:32:09 crc kubenswrapper[4886]: I0129 17:32:09.855176 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0df23cbc18a986a7161c7c8329ec1af48c1a7572d4c3627fe49cd88cdfaa2f8e"} err="failed to get container status \"0df23cbc18a986a7161c7c8329ec1af48c1a7572d4c3627fe49cd88cdfaa2f8e\": rpc error: code = NotFound desc = could not find container \"0df23cbc18a986a7161c7c8329ec1af48c1a7572d4c3627fe49cd88cdfaa2f8e\": container with ID starting with 0df23cbc18a986a7161c7c8329ec1af48c1a7572d4c3627fe49cd88cdfaa2f8e not found: ID does not exist" Jan 29 17:32:09 crc kubenswrapper[4886]: I0129 17:32:09.855197 4886 scope.go:117] "RemoveContainer" containerID="eb4c3a9407b847e7eb345a3881082eb392d458c7757612afe4499b64754972a3" Jan 29 17:32:09 crc kubenswrapper[4886]: E0129 17:32:09.855490 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb4c3a9407b847e7eb345a3881082eb392d458c7757612afe4499b64754972a3\": container with ID starting with eb4c3a9407b847e7eb345a3881082eb392d458c7757612afe4499b64754972a3 not found: ID does not exist" containerID="eb4c3a9407b847e7eb345a3881082eb392d458c7757612afe4499b64754972a3" Jan 29 17:32:09 crc kubenswrapper[4886]: I0129 17:32:09.855543 4886 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb4c3a9407b847e7eb345a3881082eb392d458c7757612afe4499b64754972a3"} err="failed to get container status \"eb4c3a9407b847e7eb345a3881082eb392d458c7757612afe4499b64754972a3\": rpc error: code = NotFound desc = could not find container \"eb4c3a9407b847e7eb345a3881082eb392d458c7757612afe4499b64754972a3\": container with ID starting with eb4c3a9407b847e7eb345a3881082eb392d458c7757612afe4499b64754972a3 not found: ID does not exist" Jan 29 17:32:09 crc kubenswrapper[4886]: I0129 17:32:09.855570 4886 scope.go:117] "RemoveContainer" containerID="f4c918fc85d3407db2aee5b3bc2331867912c942c0d4a6509d35d2d9bbbd8082" Jan 29 17:32:09 crc kubenswrapper[4886]: E0129 17:32:09.855818 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4c918fc85d3407db2aee5b3bc2331867912c942c0d4a6509d35d2d9bbbd8082\": container with ID starting with f4c918fc85d3407db2aee5b3bc2331867912c942c0d4a6509d35d2d9bbbd8082 not found: ID does not exist" containerID="f4c918fc85d3407db2aee5b3bc2331867912c942c0d4a6509d35d2d9bbbd8082" Jan 29 17:32:09 crc kubenswrapper[4886]: I0129 17:32:09.855859 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4c918fc85d3407db2aee5b3bc2331867912c942c0d4a6509d35d2d9bbbd8082"} err="failed to get container status \"f4c918fc85d3407db2aee5b3bc2331867912c942c0d4a6509d35d2d9bbbd8082\": rpc error: code = NotFound desc = could not find container \"f4c918fc85d3407db2aee5b3bc2331867912c942c0d4a6509d35d2d9bbbd8082\": container with ID starting with f4c918fc85d3407db2aee5b3bc2331867912c942c0d4a6509d35d2d9bbbd8082 not found: ID does not exist" Jan 29 17:32:10 crc kubenswrapper[4886]: I0129 17:32:10.633413 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b15eadb0-03e5-432e-a2e4-3366698223ab" path="/var/lib/kubelet/pods/b15eadb0-03e5-432e-a2e4-3366698223ab/volumes" Jan 29 17:32:29 crc kubenswrapper[4886]: I0129 17:32:29.661206 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:32:29 crc kubenswrapper[4886]: I0129 17:32:29.661802 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:32:59 crc kubenswrapper[4886]: I0129 17:32:59.661522 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:32:59 crc kubenswrapper[4886]: I0129 17:32:59.662091 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:32:59 crc kubenswrapper[4886]: I0129 
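[Annotation] The machine-config-daemon liveness probes above fail at 17:31:59, 17:32:29, and 17:32:59, i.e. every 30 seconds, and only the third consecutive failure flips the probe to unhealthy, after which the kubelet kills the container. That spacing is consistent with failureThreshold=3, which is inferred from the timestamps rather than stated anywhere in the log. A toy version of the prober's consecutive-failure bookkeeping:

```python
# Sketch of probe-worker bookkeeping: mark unhealthy only after N
# consecutive failures (failureThreshold=3 inferred from the
# 17:31:59 / 17:32:29 / 17:32:59 spacing above).
def probe_results(results, failure_threshold=3):
    consecutive = 0
    for t, ok in results:
        consecutive = 0 if ok else consecutive + 1
        if consecutive >= failure_threshold:
            print(f"{t}: unhealthy -> kill container (gracePeriod from pod spec)")
            consecutive = 0
        else:
            print(f"{t}: {'ok' if ok else f'failure {consecutive}/{failure_threshold}'}")

probe_results([("17:31:59", False), ("17:32:29", False), ("17:32:59", False)])
```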
Jan 29 17:32:59 crc kubenswrapper[4886]: I0129 17:32:59.662828 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"35b339594f7204cb48b198eeee2a9559b017a0c55878601a4de933a78b8a5a91"} pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 17:32:59 crc kubenswrapper[4886]: I0129 17:32:59.662887 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" containerID="cri-o://35b339594f7204cb48b198eeee2a9559b017a0c55878601a4de933a78b8a5a91" gracePeriod=600
Jan 29 17:33:00 crc kubenswrapper[4886]: I0129 17:33:00.210149 4886 generic.go:334] "Generic (PLEG): container finished" podID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerID="35b339594f7204cb48b198eeee2a9559b017a0c55878601a4de933a78b8a5a91" exitCode=0
Jan 29 17:33:00 crc kubenswrapper[4886]: I0129 17:33:00.210239 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerDied","Data":"35b339594f7204cb48b198eeee2a9559b017a0c55878601a4de933a78b8a5a91"}
Jan 29 17:33:00 crc kubenswrapper[4886]: I0129 17:33:00.210533 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerStarted","Data":"4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a"}
Jan 29 17:33:00 crc kubenswrapper[4886]: I0129 17:33:00.210552 4886 scope.go:117] "RemoveContainer" containerID="55efff0568134497b8e6ea81a0b8b1f655f106780275cdcff4518a5bd8ee6d2b"
Jan 29 17:34:55 crc kubenswrapper[4886]: I0129 17:34:55.464534 4886 scope.go:117] "RemoveContainer" containerID="642ddcf7d24f5ba4de7f4cfe5021d1c82bdda14e5ce39e790d42f342b92ed808"
Jan 29 17:34:55 crc kubenswrapper[4886]: I0129 17:34:55.517230 4886 scope.go:117] "RemoveContainer" containerID="eb2c0e2ba022ed5bbe2b78ba5d991d3803db9fdfe00af6d0c6e96716e4b2a750"
Jan 29 17:34:55 crc kubenswrapper[4886]: I0129 17:34:55.588080 4886 scope.go:117] "RemoveContainer" containerID="743f0e0c8bd0dfe8ae38c7f2d03a8981e74ea3dba06a6339a6bd917fe57aa8e9"
Jan 29 17:35:29 crc kubenswrapper[4886]: I0129 17:35:29.661179 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 17:35:29 crc kubenswrapper[4886]: I0129 17:35:29.661856 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 17:35:59 crc kubenswrapper[4886]: I0129 17:35:59.661742 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 17:35:59 crc kubenswrapper[4886]: I0129 17:35:59.662509 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 17:36:29 crc kubenswrapper[4886]: I0129 17:36:29.660675 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 17:36:29 crc kubenswrapper[4886]: I0129 17:36:29.661090 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 17:36:29 crc kubenswrapper[4886]: I0129 17:36:29.661131 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp"
Jan 29 17:36:29 crc kubenswrapper[4886]: I0129 17:36:29.662001 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a"} pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 17:36:29 crc kubenswrapper[4886]: I0129 17:36:29.662115 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" containerID="cri-o://4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a" gracePeriod=600
Jan 29 17:36:29 crc kubenswrapper[4886]: E0129 17:36:29.784751 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:36:29 crc kubenswrapper[4886]: I0129 17:36:29.876867 4886 generic.go:334] "Generic (PLEG): container finished" podID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerID="4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a" exitCode=0
Jan 29 17:36:29 crc kubenswrapper[4886]: I0129 17:36:29.876917 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerDied","Data":"4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a"}
Jan 29 17:36:29 crc kubenswrapper[4886]: I0129 17:36:29.876955 4886 scope.go:117] "RemoveContainer" containerID="35b339594f7204cb48b198eeee2a9559b017a0c55878601a4de933a78b8a5a91"
Jan 29 17:36:29 crc kubenswrapper[4886]: I0129 17:36:29.879058 4886 scope.go:117] "RemoveContainer" containerID="4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a"
Jan 29 17:36:29 crc kubenswrapper[4886]: E0129 17:36:29.879890 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:36:42 crc kubenswrapper[4886]: I0129 17:36:42.615656 4886 scope.go:117] "RemoveContainer" containerID="4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a"
Jan 29 17:36:42 crc kubenswrapper[4886]: E0129 17:36:42.616491 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:36:54 crc kubenswrapper[4886]: I0129 17:36:54.623105 4886 scope.go:117] "RemoveContainer" containerID="4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a"
Jan 29 17:36:54 crc kubenswrapper[4886]: E0129 17:36:54.626531 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:37:07 crc kubenswrapper[4886]: I0129 17:37:07.617920 4886 scope.go:117] "RemoveContainer" containerID="4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a"
Jan 29 17:37:07 crc kubenswrapper[4886]: E0129 17:37:07.619179 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:37:18 crc kubenswrapper[4886]: I0129 17:37:18.627768 4886 scope.go:117] "RemoveContainer" containerID="4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a"
Jan 29 17:37:18 crc kubenswrapper[4886]: E0129 17:37:18.629084 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:37:31 crc kubenswrapper[4886]: I0129 17:37:31.616923 4886 scope.go:117] "RemoveContainer" containerID="4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a"
Jan 29 17:37:31 crc kubenswrapper[4886]: E0129 17:37:31.617664 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:37:42 crc kubenswrapper[4886]: I0129 17:37:42.621682 4886 scope.go:117] "RemoveContainer" containerID="4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a"
Jan 29 17:37:42 crc kubenswrapper[4886]: E0129 17:37:42.623232 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:37:56 crc kubenswrapper[4886]: I0129 17:37:56.621131 4886 scope.go:117] "RemoveContainer" containerID="4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a"
Jan 29 17:37:56 crc kubenswrapper[4886]: E0129 17:37:56.621924 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:38:07 crc kubenswrapper[4886]: I0129 17:38:07.616313 4886 scope.go:117] "RemoveContainer" containerID="4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a"
Jan 29 17:38:07 crc kubenswrapper[4886]: E0129 17:38:07.617551 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:38:19 crc kubenswrapper[4886]: I0129 17:38:19.617078 4886 scope.go:117] "RemoveContainer" containerID="4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a"
Jan 29 17:38:19 crc kubenswrapper[4886]: E0129 17:38:19.620919 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:38:31 crc kubenswrapper[4886]: I0129 17:38:31.615367 4886 scope.go:117] "RemoveContainer" containerID="4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a"
Jan 29 17:38:31 crc kubenswrapper[4886]: E0129 17:38:31.616195 4886 pod_workers.go:1301] "Error syncing pod,
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:38:43 crc kubenswrapper[4886]: I0129 17:38:43.615857 4886 scope.go:117] "RemoveContainer" containerID="4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a" Jan 29 17:38:43 crc kubenswrapper[4886]: E0129 17:38:43.616650 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:38:51 crc kubenswrapper[4886]: I0129 17:38:51.691027 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7bw7c"] Jan 29 17:38:51 crc kubenswrapper[4886]: E0129 17:38:51.692167 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15eadb0-03e5-432e-a2e4-3366698223ab" containerName="extract-utilities" Jan 29 17:38:51 crc kubenswrapper[4886]: I0129 17:38:51.692182 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15eadb0-03e5-432e-a2e4-3366698223ab" containerName="extract-utilities" Jan 29 17:38:51 crc kubenswrapper[4886]: E0129 17:38:51.692194 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15eadb0-03e5-432e-a2e4-3366698223ab" containerName="registry-server" Jan 29 17:38:51 crc kubenswrapper[4886]: I0129 17:38:51.692199 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15eadb0-03e5-432e-a2e4-3366698223ab" containerName="registry-server" Jan 29 17:38:51 crc kubenswrapper[4886]: E0129 17:38:51.692216 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15eadb0-03e5-432e-a2e4-3366698223ab" containerName="extract-content" Jan 29 17:38:51 crc kubenswrapper[4886]: I0129 17:38:51.692222 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15eadb0-03e5-432e-a2e4-3366698223ab" containerName="extract-content" Jan 29 17:38:51 crc kubenswrapper[4886]: I0129 17:38:51.692463 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="b15eadb0-03e5-432e-a2e4-3366698223ab" containerName="registry-server" Jan 29 17:38:51 crc kubenswrapper[4886]: I0129 17:38:51.695484 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7bw7c" Jan 29 17:38:51 crc kubenswrapper[4886]: I0129 17:38:51.715171 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7bw7c"] Jan 29 17:38:51 crc kubenswrapper[4886]: I0129 17:38:51.736991 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d2ph\" (UniqueName: \"kubernetes.io/projected/c566a66d-f66d-457d-80eb-a0cf5bf4e013-kube-api-access-9d2ph\") pod \"redhat-operators-7bw7c\" (UID: \"c566a66d-f66d-457d-80eb-a0cf5bf4e013\") " pod="openshift-marketplace/redhat-operators-7bw7c" Jan 29 17:38:51 crc kubenswrapper[4886]: I0129 17:38:51.737094 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c566a66d-f66d-457d-80eb-a0cf5bf4e013-catalog-content\") pod \"redhat-operators-7bw7c\" (UID: \"c566a66d-f66d-457d-80eb-a0cf5bf4e013\") " pod="openshift-marketplace/redhat-operators-7bw7c" Jan 29 17:38:51 crc kubenswrapper[4886]: I0129 17:38:51.737174 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c566a66d-f66d-457d-80eb-a0cf5bf4e013-utilities\") pod \"redhat-operators-7bw7c\" (UID: \"c566a66d-f66d-457d-80eb-a0cf5bf4e013\") " pod="openshift-marketplace/redhat-operators-7bw7c" Jan 29 17:38:51 crc kubenswrapper[4886]: I0129 17:38:51.839020 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d2ph\" (UniqueName: \"kubernetes.io/projected/c566a66d-f66d-457d-80eb-a0cf5bf4e013-kube-api-access-9d2ph\") pod \"redhat-operators-7bw7c\" (UID: \"c566a66d-f66d-457d-80eb-a0cf5bf4e013\") " pod="openshift-marketplace/redhat-operators-7bw7c" Jan 29 17:38:51 crc kubenswrapper[4886]: I0129 17:38:51.839095 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c566a66d-f66d-457d-80eb-a0cf5bf4e013-catalog-content\") pod \"redhat-operators-7bw7c\" (UID: \"c566a66d-f66d-457d-80eb-a0cf5bf4e013\") " pod="openshift-marketplace/redhat-operators-7bw7c" Jan 29 17:38:51 crc kubenswrapper[4886]: I0129 17:38:51.839152 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c566a66d-f66d-457d-80eb-a0cf5bf4e013-utilities\") pod \"redhat-operators-7bw7c\" (UID: \"c566a66d-f66d-457d-80eb-a0cf5bf4e013\") " pod="openshift-marketplace/redhat-operators-7bw7c" Jan 29 17:38:51 crc kubenswrapper[4886]: I0129 17:38:51.839630 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c566a66d-f66d-457d-80eb-a0cf5bf4e013-utilities\") pod \"redhat-operators-7bw7c\" (UID: \"c566a66d-f66d-457d-80eb-a0cf5bf4e013\") " pod="openshift-marketplace/redhat-operators-7bw7c" Jan 29 17:38:51 crc kubenswrapper[4886]: I0129 17:38:51.839771 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c566a66d-f66d-457d-80eb-a0cf5bf4e013-catalog-content\") pod \"redhat-operators-7bw7c\" (UID: \"c566a66d-f66d-457d-80eb-a0cf5bf4e013\") " pod="openshift-marketplace/redhat-operators-7bw7c" Jan 29 17:38:51 crc kubenswrapper[4886]: I0129 17:38:51.867014 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9d2ph\" (UniqueName: \"kubernetes.io/projected/c566a66d-f66d-457d-80eb-a0cf5bf4e013-kube-api-access-9d2ph\") pod \"redhat-operators-7bw7c\" (UID: \"c566a66d-f66d-457d-80eb-a0cf5bf4e013\") " pod="openshift-marketplace/redhat-operators-7bw7c" Jan 29 17:38:52 crc kubenswrapper[4886]: I0129 17:38:52.033305 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7bw7c" Jan 29 17:38:52 crc kubenswrapper[4886]: I0129 17:38:52.568959 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7bw7c"] Jan 29 17:38:52 crc kubenswrapper[4886]: I0129 17:38:52.757145 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bw7c" event={"ID":"c566a66d-f66d-457d-80eb-a0cf5bf4e013","Type":"ContainerStarted","Data":"31280720311a3cf46c0d281650fde637fb00d0bd369f8b6e628ebaffb4d39ace"} Jan 29 17:38:52 crc kubenswrapper[4886]: I0129 17:38:52.757388 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bw7c" event={"ID":"c566a66d-f66d-457d-80eb-a0cf5bf4e013","Type":"ContainerStarted","Data":"cab69af52cd3a4f3f325f6b78803a593e82fd270c10956a862ec4c1b3df6eb47"} Jan 29 17:38:53 crc kubenswrapper[4886]: I0129 17:38:53.774123 4886 generic.go:334] "Generic (PLEG): container finished" podID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" containerID="31280720311a3cf46c0d281650fde637fb00d0bd369f8b6e628ebaffb4d39ace" exitCode=0 Jan 29 17:38:53 crc kubenswrapper[4886]: I0129 17:38:53.774219 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bw7c" event={"ID":"c566a66d-f66d-457d-80eb-a0cf5bf4e013","Type":"ContainerDied","Data":"31280720311a3cf46c0d281650fde637fb00d0bd369f8b6e628ebaffb4d39ace"} Jan 29 17:38:53 crc kubenswrapper[4886]: I0129 17:38:53.777934 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 17:38:53 crc kubenswrapper[4886]: E0129 17:38:53.917138 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 17:38:53 crc kubenswrapper[4886]: E0129 17:38:53.917370 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9d2ph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7bw7c_openshift-marketplace(c566a66d-f66d-457d-80eb-a0cf5bf4e013): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:38:53 crc kubenswrapper[4886]: E0129 17:38:53.918710 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:38:54 crc kubenswrapper[4886]: E0129 17:38:54.795709 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:38:58 crc kubenswrapper[4886]: I0129 17:38:58.624672 4886 scope.go:117] "RemoveContainer" containerID="4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a" Jan 29 17:38:58 crc kubenswrapper[4886]: E0129 17:38:58.625358 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:39:04 crc kubenswrapper[4886]: I0129 17:39:04.180023 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sqs8b"] Jan 29 17:39:04 crc kubenswrapper[4886]: I0129 17:39:04.183966 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqs8b" Jan 29 17:39:04 crc kubenswrapper[4886]: I0129 17:39:04.194577 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqs8b"] Jan 29 17:39:04 crc kubenswrapper[4886]: I0129 17:39:04.216551 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8da04de-c293-46ce-aeae-b2081be3c077-catalog-content\") pod \"redhat-marketplace-sqs8b\" (UID: \"d8da04de-c293-46ce-aeae-b2081be3c077\") " pod="openshift-marketplace/redhat-marketplace-sqs8b" Jan 29 17:39:04 crc kubenswrapper[4886]: I0129 17:39:04.216706 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4d4z\" (UniqueName: \"kubernetes.io/projected/d8da04de-c293-46ce-aeae-b2081be3c077-kube-api-access-q4d4z\") pod \"redhat-marketplace-sqs8b\" (UID: \"d8da04de-c293-46ce-aeae-b2081be3c077\") " pod="openshift-marketplace/redhat-marketplace-sqs8b" Jan 29 17:39:04 crc kubenswrapper[4886]: I0129 17:39:04.216742 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8da04de-c293-46ce-aeae-b2081be3c077-utilities\") pod \"redhat-marketplace-sqs8b\" (UID: \"d8da04de-c293-46ce-aeae-b2081be3c077\") " pod="openshift-marketplace/redhat-marketplace-sqs8b" Jan 29 17:39:04 crc kubenswrapper[4886]: I0129 17:39:04.320304 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8da04de-c293-46ce-aeae-b2081be3c077-catalog-content\") pod \"redhat-marketplace-sqs8b\" (UID: \"d8da04de-c293-46ce-aeae-b2081be3c077\") " pod="openshift-marketplace/redhat-marketplace-sqs8b" Jan 29 17:39:04 crc kubenswrapper[4886]: I0129 17:39:04.320464 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4d4z\" (UniqueName: \"kubernetes.io/projected/d8da04de-c293-46ce-aeae-b2081be3c077-kube-api-access-q4d4z\") pod \"redhat-marketplace-sqs8b\" (UID: \"d8da04de-c293-46ce-aeae-b2081be3c077\") " pod="openshift-marketplace/redhat-marketplace-sqs8b" Jan 29 17:39:04 crc kubenswrapper[4886]: I0129 17:39:04.321113 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8da04de-c293-46ce-aeae-b2081be3c077-utilities\") pod \"redhat-marketplace-sqs8b\" (UID: \"d8da04de-c293-46ce-aeae-b2081be3c077\") " pod="openshift-marketplace/redhat-marketplace-sqs8b" Jan 29 17:39:04 crc kubenswrapper[4886]: I0129 17:39:04.321124 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8da04de-c293-46ce-aeae-b2081be3c077-catalog-content\") pod \"redhat-marketplace-sqs8b\" (UID: \"d8da04de-c293-46ce-aeae-b2081be3c077\") " pod="openshift-marketplace/redhat-marketplace-sqs8b" Jan 29 17:39:04 crc kubenswrapper[4886]: I0129 17:39:04.320495 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8da04de-c293-46ce-aeae-b2081be3c077-utilities\") pod \"redhat-marketplace-sqs8b\" (UID: \"d8da04de-c293-46ce-aeae-b2081be3c077\") " pod="openshift-marketplace/redhat-marketplace-sqs8b" Jan 29 17:39:04 crc kubenswrapper[4886]: I0129 17:39:04.346982 4886 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-q4d4z\" (UniqueName: \"kubernetes.io/projected/d8da04de-c293-46ce-aeae-b2081be3c077-kube-api-access-q4d4z\") pod \"redhat-marketplace-sqs8b\" (UID: \"d8da04de-c293-46ce-aeae-b2081be3c077\") " pod="openshift-marketplace/redhat-marketplace-sqs8b" Jan 29 17:39:04 crc kubenswrapper[4886]: I0129 17:39:04.511649 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqs8b" Jan 29 17:39:05 crc kubenswrapper[4886]: I0129 17:39:05.052304 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqs8b"] Jan 29 17:39:05 crc kubenswrapper[4886]: I0129 17:39:05.945460 4886 generic.go:334] "Generic (PLEG): container finished" podID="d8da04de-c293-46ce-aeae-b2081be3c077" containerID="95fe5d5ec1cc0c1d3c6bdcb2b0f28f4b7f72e0b8cf33d409b80c3bfccdde3d22" exitCode=0 Jan 29 17:39:05 crc kubenswrapper[4886]: I0129 17:39:05.945577 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqs8b" event={"ID":"d8da04de-c293-46ce-aeae-b2081be3c077","Type":"ContainerDied","Data":"95fe5d5ec1cc0c1d3c6bdcb2b0f28f4b7f72e0b8cf33d409b80c3bfccdde3d22"} Jan 29 17:39:05 crc kubenswrapper[4886]: I0129 17:39:05.945731 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqs8b" event={"ID":"d8da04de-c293-46ce-aeae-b2081be3c077","Type":"ContainerStarted","Data":"efefc164eab7dbbf5bc524a94050b180180d68604ff2396211c4fb6aee8d9fad"} Jan 29 17:39:06 crc kubenswrapper[4886]: E0129 17:39:06.090206 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 17:39:06 crc kubenswrapper[4886]: E0129 17:39:06.091065 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4d4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-sqs8b_openshift-marketplace(d8da04de-c293-46ce-aeae-b2081be3c077): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:39:06 crc kubenswrapper[4886]: E0129 17:39:06.092449 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:39:06 crc kubenswrapper[4886]: E0129 17:39:06.961981 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:39:09 crc kubenswrapper[4886]: I0129 17:39:09.615510 4886 scope.go:117] "RemoveContainer" containerID="4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a" Jan 29 17:39:09 crc kubenswrapper[4886]: E0129 17:39:09.616583 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:39:09 crc kubenswrapper[4886]: E0129 17:39:09.742816 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 17:39:09 crc 
kubenswrapper[4886]: E0129 17:39:09.743433 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9d2ph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7bw7c_openshift-marketplace(c566a66d-f66d-457d-80eb-a0cf5bf4e013): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:39:09 crc kubenswrapper[4886]: E0129 17:39:09.745416 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:39:20 crc kubenswrapper[4886]: E0129 17:39:20.621212 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:39:20 crc kubenswrapper[4886]: E0129 17:39:20.772186 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 17:39:20 crc kubenswrapper[4886]: E0129 17:39:20.772427 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4d4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-sqs8b_openshift-marketplace(d8da04de-c293-46ce-aeae-b2081be3c077): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:39:20 crc kubenswrapper[4886]: E0129 17:39:20.774377 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:39:24 crc kubenswrapper[4886]: I0129 17:39:24.615880 4886 scope.go:117] "RemoveContainer" containerID="4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a" Jan 29 17:39:24 crc kubenswrapper[4886]: E0129 17:39:24.617058 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:39:32 crc kubenswrapper[4886]: E0129 17:39:32.768577 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 17:39:32 crc kubenswrapper[4886]: E0129 17:39:32.769257 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9d2ph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7bw7c_openshift-marketplace(c566a66d-f66d-457d-80eb-a0cf5bf4e013): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:39:32 crc kubenswrapper[4886]: E0129 17:39:32.770534 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:39:34 crc kubenswrapper[4886]: E0129 17:39:34.618895 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:39:39 crc kubenswrapper[4886]: I0129 17:39:39.616158 4886 scope.go:117] "RemoveContainer" containerID="4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a" Jan 29 17:39:39 crc kubenswrapper[4886]: E0129 17:39:39.616787 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:39:45 crc kubenswrapper[4886]: E0129 17:39:45.619067 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:39:49 crc 
kubenswrapper[4886]: E0129 17:39:49.750289 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 17:39:49 crc kubenswrapper[4886]: E0129 17:39:49.751096 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4d4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-sqs8b_openshift-marketplace(d8da04de-c293-46ce-aeae-b2081be3c077): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:39:49 crc kubenswrapper[4886]: E0129 17:39:49.752749 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:39:54 crc kubenswrapper[4886]: I0129 17:39:54.617016 4886 scope.go:117] "RemoveContainer" containerID="4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a" Jan 29 17:39:54 crc kubenswrapper[4886]: E0129 17:39:54.618191 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:39:58 crc kubenswrapper[4886]: E0129 17:39:58.632907 4886 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:40:04 crc kubenswrapper[4886]: E0129 17:40:04.618417 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:40:09 crc kubenswrapper[4886]: I0129 17:40:09.615900 4886 scope.go:117] "RemoveContainer" containerID="4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a" Jan 29 17:40:09 crc kubenswrapper[4886]: E0129 17:40:09.617225 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:40:09 crc kubenswrapper[4886]: E0129 17:40:09.619317 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:40:14 crc kubenswrapper[4886]: I0129 17:40:14.507857 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qsjfd"] Jan 29 17:40:14 crc kubenswrapper[4886]: I0129 17:40:14.511485 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qsjfd" Jan 29 17:40:14 crc kubenswrapper[4886]: I0129 17:40:14.532926 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qsjfd"] Jan 29 17:40:14 crc kubenswrapper[4886]: I0129 17:40:14.619558 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ceed770-f253-4044-92f0-c8a07b89b621-utilities\") pod \"certified-operators-qsjfd\" (UID: \"7ceed770-f253-4044-92f0-c8a07b89b621\") " pod="openshift-marketplace/certified-operators-qsjfd" Jan 29 17:40:14 crc kubenswrapper[4886]: I0129 17:40:14.619780 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ceed770-f253-4044-92f0-c8a07b89b621-catalog-content\") pod \"certified-operators-qsjfd\" (UID: \"7ceed770-f253-4044-92f0-c8a07b89b621\") " pod="openshift-marketplace/certified-operators-qsjfd" Jan 29 17:40:14 crc kubenswrapper[4886]: I0129 17:40:14.619935 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlxp8\" (UniqueName: \"kubernetes.io/projected/7ceed770-f253-4044-92f0-c8a07b89b621-kube-api-access-nlxp8\") pod \"certified-operators-qsjfd\" (UID: \"7ceed770-f253-4044-92f0-c8a07b89b621\") " pod="openshift-marketplace/certified-operators-qsjfd" Jan 29 17:40:14 crc kubenswrapper[4886]: I0129 17:40:14.722451 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlxp8\" (UniqueName: \"kubernetes.io/projected/7ceed770-f253-4044-92f0-c8a07b89b621-kube-api-access-nlxp8\") pod \"certified-operators-qsjfd\" (UID: \"7ceed770-f253-4044-92f0-c8a07b89b621\") " pod="openshift-marketplace/certified-operators-qsjfd" Jan 29 17:40:14 crc kubenswrapper[4886]: I0129 17:40:14.722644 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ceed770-f253-4044-92f0-c8a07b89b621-utilities\") pod \"certified-operators-qsjfd\" (UID: \"7ceed770-f253-4044-92f0-c8a07b89b621\") " pod="openshift-marketplace/certified-operators-qsjfd" Jan 29 17:40:14 crc kubenswrapper[4886]: I0129 17:40:14.722783 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ceed770-f253-4044-92f0-c8a07b89b621-catalog-content\") pod \"certified-operators-qsjfd\" (UID: \"7ceed770-f253-4044-92f0-c8a07b89b621\") " pod="openshift-marketplace/certified-operators-qsjfd" Jan 29 17:40:14 crc kubenswrapper[4886]: I0129 17:40:14.723484 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ceed770-f253-4044-92f0-c8a07b89b621-catalog-content\") pod \"certified-operators-qsjfd\" (UID: \"7ceed770-f253-4044-92f0-c8a07b89b621\") " pod="openshift-marketplace/certified-operators-qsjfd" Jan 29 17:40:14 crc kubenswrapper[4886]: I0129 17:40:14.723926 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ceed770-f253-4044-92f0-c8a07b89b621-utilities\") pod \"certified-operators-qsjfd\" (UID: \"7ceed770-f253-4044-92f0-c8a07b89b621\") " pod="openshift-marketplace/certified-operators-qsjfd" Jan 29 17:40:14 crc kubenswrapper[4886]: I0129 17:40:14.744193 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nlxp8\" (UniqueName: \"kubernetes.io/projected/7ceed770-f253-4044-92f0-c8a07b89b621-kube-api-access-nlxp8\") pod \"certified-operators-qsjfd\" (UID: \"7ceed770-f253-4044-92f0-c8a07b89b621\") " pod="openshift-marketplace/certified-operators-qsjfd" Jan 29 17:40:14 crc kubenswrapper[4886]: I0129 17:40:14.838297 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qsjfd" Jan 29 17:40:15 crc kubenswrapper[4886]: I0129 17:40:15.413090 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qsjfd"] Jan 29 17:40:15 crc kubenswrapper[4886]: I0129 17:40:15.940095 4886 generic.go:334] "Generic (PLEG): container finished" podID="7ceed770-f253-4044-92f0-c8a07b89b621" containerID="bedb65e37127565b5119ee8d90f572bdf6b6802d26fcd6797bad10fc8e07c14b" exitCode=0 Jan 29 17:40:15 crc kubenswrapper[4886]: I0129 17:40:15.940166 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsjfd" event={"ID":"7ceed770-f253-4044-92f0-c8a07b89b621","Type":"ContainerDied","Data":"bedb65e37127565b5119ee8d90f572bdf6b6802d26fcd6797bad10fc8e07c14b"} Jan 29 17:40:15 crc kubenswrapper[4886]: I0129 17:40:15.940487 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsjfd" event={"ID":"7ceed770-f253-4044-92f0-c8a07b89b621","Type":"ContainerStarted","Data":"fb5b6b721dd0a2050f48ef0e26fac1871e4ba7b7b47b95e41a00c0852ef2c55b"} Jan 29 17:40:16 crc kubenswrapper[4886]: E0129 17:40:16.075802 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 17:40:16 crc kubenswrapper[4886]: E0129 17:40:16.076005 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nlxp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qsjfd_openshift-marketplace(7ceed770-f253-4044-92f0-c8a07b89b621): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:40:16 crc kubenswrapper[4886]: E0129 17:40:16.078041 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:40:16 crc kubenswrapper[4886]: E0129 17:40:16.954143 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:40:18 crc kubenswrapper[4886]: E0129 17:40:18.628021 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:40:21 crc kubenswrapper[4886]: I0129 17:40:21.615791 4886 scope.go:117] "RemoveContainer" containerID="4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a" Jan 29 17:40:21 crc kubenswrapper[4886]: E0129 17:40:21.616374 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:40:22 
crc kubenswrapper[4886]: E0129 17:40:22.741714 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 17:40:22 crc kubenswrapper[4886]: E0129 17:40:22.742466 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9d2ph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7bw7c_openshift-marketplace(c566a66d-f66d-457d-80eb-a0cf5bf4e013): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:40:22 crc kubenswrapper[4886]: E0129 17:40:22.744313 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:40:30 crc kubenswrapper[4886]: E0129 17:40:30.748523 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 17:40:30 crc kubenswrapper[4886]: E0129 17:40:30.749407 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nlxp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qsjfd_openshift-marketplace(7ceed770-f253-4044-92f0-c8a07b89b621): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:40:30 crc kubenswrapper[4886]: E0129 17:40:30.751558 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:40:31 crc kubenswrapper[4886]: E0129 17:40:31.740018 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 17:40:31 crc kubenswrapper[4886]: E0129 17:40:31.740659 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4d4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-sqs8b_openshift-marketplace(d8da04de-c293-46ce-aeae-b2081be3c077): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:40:31 crc kubenswrapper[4886]: E0129 17:40:31.741894 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:40:34 crc kubenswrapper[4886]: I0129 17:40:34.616250 4886 scope.go:117] "RemoveContainer" containerID="4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a" Jan 29 17:40:34 crc kubenswrapper[4886]: E0129 17:40:34.617841 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:40:35 crc kubenswrapper[4886]: E0129 17:40:35.618820 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:40:43 crc kubenswrapper[4886]: E0129 17:40:43.619076 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:40:46 crc 
kubenswrapper[4886]: E0129 17:40:46.620160 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:40:47 crc kubenswrapper[4886]: I0129 17:40:47.616174 4886 scope.go:117] "RemoveContainer" containerID="4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a" Jan 29 17:40:47 crc kubenswrapper[4886]: E0129 17:40:47.616754 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:40:50 crc kubenswrapper[4886]: E0129 17:40:50.621397 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:40:54 crc kubenswrapper[4886]: E0129 17:40:54.754612 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 17:40:54 crc kubenswrapper[4886]: E0129 17:40:54.755367 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nlxp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-qsjfd_openshift-marketplace(7ceed770-f253-4044-92f0-c8a07b89b621): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:40:54 crc kubenswrapper[4886]: E0129 17:40:54.756887 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:40:58 crc kubenswrapper[4886]: I0129 17:40:58.615426 4886 scope.go:117] "RemoveContainer" containerID="4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a" Jan 29 17:40:58 crc kubenswrapper[4886]: E0129 17:40:58.616721 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:41:00 crc kubenswrapper[4886]: E0129 17:41:00.618187 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:41:04 crc kubenswrapper[4886]: E0129 17:41:04.619962 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:41:05 crc kubenswrapper[4886]: E0129 17:41:05.617481 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:41:13 crc kubenswrapper[4886]: I0129 17:41:13.615605 4886 scope.go:117] "RemoveContainer" containerID="4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a" Jan 29 17:41:13 crc kubenswrapper[4886]: E0129 17:41:13.616928 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:41:14 crc kubenswrapper[4886]: E0129 17:41:14.620121 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:41:19 crc kubenswrapper[4886]: E0129 17:41:19.618914 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:41:19 crc kubenswrapper[4886]: E0129 17:41:19.619160 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:41:25 crc kubenswrapper[4886]: I0129 17:41:25.615982 4886 scope.go:117] "RemoveContainer" containerID="4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a" Jan 29 17:41:25 crc kubenswrapper[4886]: E0129 17:41:25.617833 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:41:27 crc kubenswrapper[4886]: E0129 17:41:27.620478 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:41:30 crc kubenswrapper[4886]: E0129 17:41:30.619356 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:41:34 crc kubenswrapper[4886]: E0129 17:41:34.620620 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:41:38 crc kubenswrapper[4886]: I0129 17:41:38.628894 4886 scope.go:117] "RemoveContainer" containerID="4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a" Jan 29 17:41:38 crc kubenswrapper[4886]: I0129 17:41:38.950529 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerStarted","Data":"08c1b8c3edabbeb571f6803cae251f6a7919758b2342154da4b61975a4b2aba4"} Jan 29 17:41:39 crc kubenswrapper[4886]: E0129 17:41:39.617533 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:41:43 crc kubenswrapper[4886]: E0129 17:41:43.750448 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 17:41:43 crc kubenswrapper[4886]: E0129 17:41:43.751740 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nlxp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qsjfd_openshift-marketplace(7ceed770-f253-4044-92f0-c8a07b89b621): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:41:43 crc kubenswrapper[4886]: E0129 17:41:43.752982 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:41:46 crc kubenswrapper[4886]: E0129 17:41:46.753473 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 17:41:46 crc kubenswrapper[4886]: E0129 17:41:46.754698 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9d2ph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7bw7c_openshift-marketplace(c566a66d-f66d-457d-80eb-a0cf5bf4e013): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:41:46 crc kubenswrapper[4886]: E0129 17:41:46.756066 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:41:54 crc kubenswrapper[4886]: E0129 17:41:54.623888 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:41:54 crc kubenswrapper[4886]: E0129 17:41:54.749934 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 17:41:54 crc kubenswrapper[4886]: E0129 17:41:54.750191 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4d4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-sqs8b_openshift-marketplace(d8da04de-c293-46ce-aeae-b2081be3c077): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:41:54 crc kubenswrapper[4886]: E0129 17:41:54.751741 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:41:57 crc kubenswrapper[4886]: E0129 17:41:57.619686 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:41:59 crc kubenswrapper[4886]: I0129 17:41:59.765481 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="954d7d1e-fd92-4c83-87d8-87a1f866dbbe" containerName="galera" probeResult="failure" output="command timed out" Jan 29 17:41:59 crc kubenswrapper[4886]: I0129 17:41:59.766751 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="954d7d1e-fd92-4c83-87d8-87a1f866dbbe" containerName="galera" probeResult="failure" output="command timed out" Jan 29 17:42:05 crc kubenswrapper[4886]: E0129 17:42:05.621632 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:42:07 crc kubenswrapper[4886]: E0129 17:42:07.620220 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:42:08 crc kubenswrapper[4886]: E0129 17:42:08.645401 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:42:18 crc kubenswrapper[4886]: E0129 17:42:18.636450 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:42:19 crc kubenswrapper[4886]: E0129 17:42:19.619829 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:42:20 crc kubenswrapper[4886]: E0129 17:42:20.618504 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:42:32 crc kubenswrapper[4886]: E0129 17:42:32.618811 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:42:32 crc kubenswrapper[4886]: E0129 17:42:32.621599 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:42:34 crc kubenswrapper[4886]: E0129 17:42:34.618007 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:42:43 crc kubenswrapper[4886]: E0129 17:42:43.621551 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:42:45 crc kubenswrapper[4886]: E0129 17:42:45.618322 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" 
with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:42:46 crc kubenswrapper[4886]: E0129 17:42:46.618865 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:42:54 crc kubenswrapper[4886]: E0129 17:42:54.619762 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:42:57 crc kubenswrapper[4886]: E0129 17:42:57.617498 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:43:00 crc kubenswrapper[4886]: E0129 17:43:00.616582 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:43:06 crc kubenswrapper[4886]: E0129 17:43:06.621045 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:43:12 crc kubenswrapper[4886]: E0129 17:43:12.622415 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:43:12 crc kubenswrapper[4886]: E0129 17:43:12.788594 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 17:43:12 crc kubenswrapper[4886]: E0129 17:43:12.788773 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nlxp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qsjfd_openshift-marketplace(7ceed770-f253-4044-92f0-c8a07b89b621): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:43:12 crc kubenswrapper[4886]: E0129 17:43:12.789989 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:43:17 crc kubenswrapper[4886]: E0129 17:43:17.618390 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:43:23 crc kubenswrapper[4886]: E0129 17:43:23.620913 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:43:25 crc kubenswrapper[4886]: E0129 17:43:25.618890 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:43:31 crc kubenswrapper[4886]: E0129 17:43:31.619082 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:43:36 crc kubenswrapper[4886]: E0129 17:43:36.620294 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:43:38 crc kubenswrapper[4886]: E0129 17:43:38.632221 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:43:42 crc kubenswrapper[4886]: E0129 17:43:42.621894 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:43:47 crc kubenswrapper[4886]: E0129 17:43:47.624855 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:43:51 crc kubenswrapper[4886]: E0129 17:43:51.618635 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:43:54 crc kubenswrapper[4886]: E0129 17:43:54.619366 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:43:59 crc kubenswrapper[4886]: E0129 17:43:59.619372 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:43:59 crc kubenswrapper[4886]: I0129 17:43:59.661733 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:43:59 crc kubenswrapper[4886]: I0129 17:43:59.661818 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Jan 29 17:44:04 crc kubenswrapper[4886]: E0129 17:44:04.619979 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:44:06 crc kubenswrapper[4886]: E0129 17:44:06.619560 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:44:11 crc kubenswrapper[4886]: E0129 17:44:11.617930 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:44:15 crc kubenswrapper[4886]: E0129 17:44:15.619168 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:44:19 crc kubenswrapper[4886]: E0129 17:44:19.619235 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:44:22 crc kubenswrapper[4886]: E0129 17:44:22.621765 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:44:29 crc kubenswrapper[4886]: E0129 17:44:29.618680 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:44:29 crc kubenswrapper[4886]: I0129 17:44:29.661727 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:44:29 crc kubenswrapper[4886]: I0129 17:44:29.662158 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:44:31 crc kubenswrapper[4886]: 
I0129 17:44:31.631355 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 17:44:31 crc kubenswrapper[4886]: E0129 17:44:31.820492 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 17:44:31 crc kubenswrapper[4886]: E0129 17:44:31.820648 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9d2ph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7bw7c_openshift-marketplace(c566a66d-f66d-457d-80eb-a0cf5bf4e013): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:44:31 crc kubenswrapper[4886]: E0129 17:44:31.821940 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:44:34 crc kubenswrapper[4886]: E0129 17:44:34.639122 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:44:41 crc kubenswrapper[4886]: E0129 17:44:41.761632 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 17:44:41 crc kubenswrapper[4886]: E0129 17:44:41.762649 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4d4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-sqs8b_openshift-marketplace(d8da04de-c293-46ce-aeae-b2081be3c077): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:44:41 crc kubenswrapper[4886]: E0129 17:44:41.764808 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:44:45 crc kubenswrapper[4886]: E0129 17:44:45.619610 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:44:47 crc kubenswrapper[4886]: E0129 17:44:47.617865 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:44:52 crc kubenswrapper[4886]: E0129 17:44:52.618360 4886 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:44:59 crc kubenswrapper[4886]: E0129 17:44:59.619180 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:44:59 crc kubenswrapper[4886]: I0129 17:44:59.661314 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:44:59 crc kubenswrapper[4886]: I0129 17:44:59.661438 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:44:59 crc kubenswrapper[4886]: I0129 17:44:59.661504 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" Jan 29 17:44:59 crc kubenswrapper[4886]: I0129 17:44:59.662776 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"08c1b8c3edabbeb571f6803cae251f6a7919758b2342154da4b61975a4b2aba4"} pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 17:44:59 crc kubenswrapper[4886]: I0129 17:44:59.662887 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" containerID="cri-o://08c1b8c3edabbeb571f6803cae251f6a7919758b2342154da4b61975a4b2aba4" gracePeriod=600 Jan 29 17:45:00 crc kubenswrapper[4886]: I0129 17:45:00.159553 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495145-zz7lc"] Jan 29 17:45:00 crc kubenswrapper[4886]: I0129 17:45:00.161927 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-zz7lc" Jan 29 17:45:00 crc kubenswrapper[4886]: I0129 17:45:00.168320 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 17:45:00 crc kubenswrapper[4886]: I0129 17:45:00.168439 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 17:45:00 crc kubenswrapper[4886]: I0129 17:45:00.178442 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495145-zz7lc"] Jan 29 17:45:00 crc kubenswrapper[4886]: I0129 17:45:00.190172 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b23a9dc7-9e89-4743-9e23-ca27f59fb5e2-config-volume\") pod \"collect-profiles-29495145-zz7lc\" (UID: \"b23a9dc7-9e89-4743-9e23-ca27f59fb5e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-zz7lc" Jan 29 17:45:00 crc kubenswrapper[4886]: I0129 17:45:00.190587 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b23a9dc7-9e89-4743-9e23-ca27f59fb5e2-secret-volume\") pod \"collect-profiles-29495145-zz7lc\" (UID: \"b23a9dc7-9e89-4743-9e23-ca27f59fb5e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-zz7lc" Jan 29 17:45:00 crc kubenswrapper[4886]: I0129 17:45:00.190785 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p57n8\" (UniqueName: \"kubernetes.io/projected/b23a9dc7-9e89-4743-9e23-ca27f59fb5e2-kube-api-access-p57n8\") pod \"collect-profiles-29495145-zz7lc\" (UID: \"b23a9dc7-9e89-4743-9e23-ca27f59fb5e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-zz7lc" Jan 29 17:45:00 crc kubenswrapper[4886]: I0129 17:45:00.292696 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p57n8\" (UniqueName: \"kubernetes.io/projected/b23a9dc7-9e89-4743-9e23-ca27f59fb5e2-kube-api-access-p57n8\") pod \"collect-profiles-29495145-zz7lc\" (UID: \"b23a9dc7-9e89-4743-9e23-ca27f59fb5e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-zz7lc" Jan 29 17:45:00 crc kubenswrapper[4886]: I0129 17:45:00.293105 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b23a9dc7-9e89-4743-9e23-ca27f59fb5e2-config-volume\") pod \"collect-profiles-29495145-zz7lc\" (UID: \"b23a9dc7-9e89-4743-9e23-ca27f59fb5e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-zz7lc" Jan 29 17:45:00 crc kubenswrapper[4886]: I0129 17:45:00.293253 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b23a9dc7-9e89-4743-9e23-ca27f59fb5e2-secret-volume\") pod \"collect-profiles-29495145-zz7lc\" (UID: \"b23a9dc7-9e89-4743-9e23-ca27f59fb5e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-zz7lc" Jan 29 17:45:00 crc kubenswrapper[4886]: I0129 17:45:00.294262 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b23a9dc7-9e89-4743-9e23-ca27f59fb5e2-config-volume\") pod 
\"collect-profiles-29495145-zz7lc\" (UID: \"b23a9dc7-9e89-4743-9e23-ca27f59fb5e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-zz7lc" Jan 29 17:45:00 crc kubenswrapper[4886]: I0129 17:45:00.301991 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b23a9dc7-9e89-4743-9e23-ca27f59fb5e2-secret-volume\") pod \"collect-profiles-29495145-zz7lc\" (UID: \"b23a9dc7-9e89-4743-9e23-ca27f59fb5e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-zz7lc" Jan 29 17:45:00 crc kubenswrapper[4886]: I0129 17:45:00.314038 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p57n8\" (UniqueName: \"kubernetes.io/projected/b23a9dc7-9e89-4743-9e23-ca27f59fb5e2-kube-api-access-p57n8\") pod \"collect-profiles-29495145-zz7lc\" (UID: \"b23a9dc7-9e89-4743-9e23-ca27f59fb5e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-zz7lc" Jan 29 17:45:00 crc kubenswrapper[4886]: I0129 17:45:00.515100 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-zz7lc" Jan 29 17:45:00 crc kubenswrapper[4886]: I0129 17:45:00.591152 4886 generic.go:334] "Generic (PLEG): container finished" podID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerID="08c1b8c3edabbeb571f6803cae251f6a7919758b2342154da4b61975a4b2aba4" exitCode=0 Jan 29 17:45:00 crc kubenswrapper[4886]: I0129 17:45:00.591201 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerDied","Data":"08c1b8c3edabbeb571f6803cae251f6a7919758b2342154da4b61975a4b2aba4"} Jan 29 17:45:00 crc kubenswrapper[4886]: I0129 17:45:00.591231 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerStarted","Data":"8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3"} Jan 29 17:45:00 crc kubenswrapper[4886]: I0129 17:45:00.591251 4886 scope.go:117] "RemoveContainer" containerID="4fb3b6296c9f652ca771a622cb99f2be698815449622de5c6a6f7a03eb63e93a" Jan 29 17:45:00 crc kubenswrapper[4886]: E0129 17:45:00.620603 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:45:01 crc kubenswrapper[4886]: I0129 17:45:01.133598 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495145-zz7lc"] Jan 29 17:45:01 crc kubenswrapper[4886]: W0129 17:45:01.137562 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb23a9dc7_9e89_4743_9e23_ca27f59fb5e2.slice/crio-94e0d2a7195faec04f61ebf925d4ee5488545ccc1559c385ba7bac3c04f5927e WatchSource:0}: Error finding container 94e0d2a7195faec04f61ebf925d4ee5488545ccc1559c385ba7bac3c04f5927e: Status 404 returned error can't find the container with id 94e0d2a7195faec04f61ebf925d4ee5488545ccc1559c385ba7bac3c04f5927e Jan 29 17:45:01 crc kubenswrapper[4886]: I0129 17:45:01.622360 4886 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-zz7lc" event={"ID":"b23a9dc7-9e89-4743-9e23-ca27f59fb5e2","Type":"ContainerStarted","Data":"2fcb97adc449db7399cd2957592ff329785589715e5cee0d9163a663d660a4ec"} Jan 29 17:45:01 crc kubenswrapper[4886]: I0129 17:45:01.622728 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-zz7lc" event={"ID":"b23a9dc7-9e89-4743-9e23-ca27f59fb5e2","Type":"ContainerStarted","Data":"94e0d2a7195faec04f61ebf925d4ee5488545ccc1559c385ba7bac3c04f5927e"} Jan 29 17:45:01 crc kubenswrapper[4886]: I0129 17:45:01.657640 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-zz7lc" podStartSLOduration=1.657616215 podStartE2EDuration="1.657616215s" podCreationTimestamp="2026-01-29 17:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:45:01.647365965 +0000 UTC m=+4984.556085277" watchObservedRunningTime="2026-01-29 17:45:01.657616215 +0000 UTC m=+4984.566335497" Jan 29 17:45:02 crc kubenswrapper[4886]: I0129 17:45:02.680576 4886 generic.go:334] "Generic (PLEG): container finished" podID="b23a9dc7-9e89-4743-9e23-ca27f59fb5e2" containerID="2fcb97adc449db7399cd2957592ff329785589715e5cee0d9163a663d660a4ec" exitCode=0 Jan 29 17:45:02 crc kubenswrapper[4886]: I0129 17:45:02.680787 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-zz7lc" event={"ID":"b23a9dc7-9e89-4743-9e23-ca27f59fb5e2","Type":"ContainerDied","Data":"2fcb97adc449db7399cd2957592ff329785589715e5cee0d9163a663d660a4ec"} Jan 29 17:45:04 crc kubenswrapper[4886]: I0129 17:45:04.715128 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-zz7lc" event={"ID":"b23a9dc7-9e89-4743-9e23-ca27f59fb5e2","Type":"ContainerDied","Data":"94e0d2a7195faec04f61ebf925d4ee5488545ccc1559c385ba7bac3c04f5927e"} Jan 29 17:45:04 crc kubenswrapper[4886]: I0129 17:45:04.715767 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94e0d2a7195faec04f61ebf925d4ee5488545ccc1559c385ba7bac3c04f5927e" Jan 29 17:45:04 crc kubenswrapper[4886]: I0129 17:45:04.720066 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-zz7lc" Jan 29 17:45:04 crc kubenswrapper[4886]: I0129 17:45:04.829448 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b23a9dc7-9e89-4743-9e23-ca27f59fb5e2-secret-volume\") pod \"b23a9dc7-9e89-4743-9e23-ca27f59fb5e2\" (UID: \"b23a9dc7-9e89-4743-9e23-ca27f59fb5e2\") " Jan 29 17:45:04 crc kubenswrapper[4886]: I0129 17:45:04.829803 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p57n8\" (UniqueName: \"kubernetes.io/projected/b23a9dc7-9e89-4743-9e23-ca27f59fb5e2-kube-api-access-p57n8\") pod \"b23a9dc7-9e89-4743-9e23-ca27f59fb5e2\" (UID: \"b23a9dc7-9e89-4743-9e23-ca27f59fb5e2\") " Jan 29 17:45:04 crc kubenswrapper[4886]: I0129 17:45:04.830119 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b23a9dc7-9e89-4743-9e23-ca27f59fb5e2-config-volume\") pod \"b23a9dc7-9e89-4743-9e23-ca27f59fb5e2\" (UID: \"b23a9dc7-9e89-4743-9e23-ca27f59fb5e2\") " Jan 29 17:45:04 crc kubenswrapper[4886]: I0129 17:45:04.830628 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b23a9dc7-9e89-4743-9e23-ca27f59fb5e2-config-volume" (OuterVolumeSpecName: "config-volume") pod "b23a9dc7-9e89-4743-9e23-ca27f59fb5e2" (UID: "b23a9dc7-9e89-4743-9e23-ca27f59fb5e2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:45:04 crc kubenswrapper[4886]: I0129 17:45:04.831046 4886 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b23a9dc7-9e89-4743-9e23-ca27f59fb5e2-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 17:45:04 crc kubenswrapper[4886]: I0129 17:45:04.860424 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b23a9dc7-9e89-4743-9e23-ca27f59fb5e2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b23a9dc7-9e89-4743-9e23-ca27f59fb5e2" (UID: "b23a9dc7-9e89-4743-9e23-ca27f59fb5e2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:45:04 crc kubenswrapper[4886]: I0129 17:45:04.861068 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b23a9dc7-9e89-4743-9e23-ca27f59fb5e2-kube-api-access-p57n8" (OuterVolumeSpecName: "kube-api-access-p57n8") pod "b23a9dc7-9e89-4743-9e23-ca27f59fb5e2" (UID: "b23a9dc7-9e89-4743-9e23-ca27f59fb5e2"). InnerVolumeSpecName "kube-api-access-p57n8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:45:04 crc kubenswrapper[4886]: I0129 17:45:04.934376 4886 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b23a9dc7-9e89-4743-9e23-ca27f59fb5e2-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 17:45:04 crc kubenswrapper[4886]: I0129 17:45:04.934428 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p57n8\" (UniqueName: \"kubernetes.io/projected/b23a9dc7-9e89-4743-9e23-ca27f59fb5e2-kube-api-access-p57n8\") on node \"crc\" DevicePath \"\"" Jan 29 17:45:05 crc kubenswrapper[4886]: I0129 17:45:05.729014 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-zz7lc" Jan 29 17:45:05 crc kubenswrapper[4886]: I0129 17:45:05.823626 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495100-wk666"] Jan 29 17:45:05 crc kubenswrapper[4886]: I0129 17:45:05.835555 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495100-wk666"] Jan 29 17:45:06 crc kubenswrapper[4886]: I0129 17:45:06.636983 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3da2d212-de01-458b-9805-8eb21ed83324" path="/var/lib/kubelet/pods/3da2d212-de01-458b-9805-8eb21ed83324/volumes" Jan 29 17:45:07 crc kubenswrapper[4886]: E0129 17:45:07.617760 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:45:11 crc kubenswrapper[4886]: E0129 17:45:11.619961 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:45:13 crc kubenswrapper[4886]: E0129 17:45:13.619201 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:45:19 crc kubenswrapper[4886]: E0129 17:45:19.619493 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:45:23 crc kubenswrapper[4886]: E0129 17:45:23.619503 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:45:28 crc kubenswrapper[4886]: E0129 17:45:28.632583 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:45:33 crc kubenswrapper[4886]: E0129 17:45:33.618603 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:45:38 crc kubenswrapper[4886]: E0129 17:45:38.633173 4886 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:45:41 crc kubenswrapper[4886]: E0129 17:45:41.617486 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:45:45 crc kubenswrapper[4886]: E0129 17:45:45.619517 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:45:52 crc kubenswrapper[4886]: E0129 17:45:52.618203 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:45:54 crc kubenswrapper[4886]: E0129 17:45:54.618291 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:45:55 crc kubenswrapper[4886]: I0129 17:45:55.916823 4886 scope.go:117] "RemoveContainer" containerID="3f2a5d53f1118cb99d6ac0f75863b8e8419b33babb29267642e06437ed3d61f8" Jan 29 17:45:59 crc kubenswrapper[4886]: E0129 17:45:59.619002 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:46:07 crc kubenswrapper[4886]: E0129 17:46:07.764668 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 17:46:07 crc kubenswrapper[4886]: E0129 17:46:07.765548 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nlxp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qsjfd_openshift-marketplace(7ceed770-f253-4044-92f0-c8a07b89b621): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:46:07 crc kubenswrapper[4886]: E0129 17:46:07.766834 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:46:08 crc kubenswrapper[4886]: E0129 17:46:08.633262 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:46:11 crc kubenswrapper[4886]: E0129 17:46:11.618748 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:46:18 crc kubenswrapper[4886]: E0129 17:46:18.635585 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:46:22 crc kubenswrapper[4886]: E0129 17:46:22.622441 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:46:22 crc kubenswrapper[4886]: E0129 17:46:22.623458 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:46:31 crc kubenswrapper[4886]: E0129 17:46:31.619898 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:46:34 crc kubenswrapper[4886]: E0129 17:46:34.619442 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:46:35 crc kubenswrapper[4886]: E0129 17:46:35.617945 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:46:44 crc kubenswrapper[4886]: E0129 17:46:44.619218 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:46:46 crc kubenswrapper[4886]: E0129 17:46:46.620625 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:46:48 crc kubenswrapper[4886]: E0129 17:46:48.633468 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:46:59 crc kubenswrapper[4886]: E0129 17:46:59.620766 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:47:00 crc kubenswrapper[4886]: E0129 17:47:00.621205 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:47:00 crc kubenswrapper[4886]: E0129 17:47:00.621292 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:47:11 crc kubenswrapper[4886]: E0129 17:47:11.617443 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:47:12 crc kubenswrapper[4886]: E0129 17:47:12.620413 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:47:13 crc kubenswrapper[4886]: E0129 17:47:13.617299 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:47:22 crc kubenswrapper[4886]: E0129 17:47:22.620190 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:47:25 crc kubenswrapper[4886]: E0129 17:47:25.617407 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:47:28 crc kubenswrapper[4886]: E0129 17:47:28.639768 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:47:29 crc kubenswrapper[4886]: I0129 17:47:29.660681 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:47:29 crc kubenswrapper[4886]: I0129 17:47:29.661563 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Jan 29 17:47:37 crc kubenswrapper[4886]: E0129 17:47:37.618776 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:47:40 crc kubenswrapper[4886]: E0129 17:47:40.617808 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:47:40 crc kubenswrapper[4886]: E0129 17:47:40.617872 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:47:50 crc kubenswrapper[4886]: E0129 17:47:50.736802 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:47:52 crc kubenswrapper[4886]: E0129 17:47:52.623634 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:47:52 crc kubenswrapper[4886]: E0129 17:47:52.623653 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:47:59 crc kubenswrapper[4886]: I0129 17:47:59.660794 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:47:59 crc kubenswrapper[4886]: I0129 17:47:59.661763 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:48:03 crc kubenswrapper[4886]: I0129 17:48:03.425609 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-llsc9"] Jan 29 17:48:03 crc kubenswrapper[4886]: E0129 17:48:03.429665 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b23a9dc7-9e89-4743-9e23-ca27f59fb5e2" containerName="collect-profiles" Jan 29 17:48:03 crc kubenswrapper[4886]: I0129 
17:48:03.429694 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23a9dc7-9e89-4743-9e23-ca27f59fb5e2" containerName="collect-profiles" Jan 29 17:48:03 crc kubenswrapper[4886]: I0129 17:48:03.434843 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23a9dc7-9e89-4743-9e23-ca27f59fb5e2" containerName="collect-profiles" Jan 29 17:48:03 crc kubenswrapper[4886]: I0129 17:48:03.438571 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-llsc9" Jan 29 17:48:03 crc kubenswrapper[4886]: I0129 17:48:03.456005 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-llsc9"] Jan 29 17:48:03 crc kubenswrapper[4886]: I0129 17:48:03.537189 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40bcd274-ae24-4057-aa88-40fd76936d1f-utilities\") pod \"community-operators-llsc9\" (UID: \"40bcd274-ae24-4057-aa88-40fd76936d1f\") " pod="openshift-marketplace/community-operators-llsc9" Jan 29 17:48:03 crc kubenswrapper[4886]: I0129 17:48:03.537694 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40bcd274-ae24-4057-aa88-40fd76936d1f-catalog-content\") pod \"community-operators-llsc9\" (UID: \"40bcd274-ae24-4057-aa88-40fd76936d1f\") " pod="openshift-marketplace/community-operators-llsc9" Jan 29 17:48:03 crc kubenswrapper[4886]: I0129 17:48:03.537765 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2hsm\" (UniqueName: \"kubernetes.io/projected/40bcd274-ae24-4057-aa88-40fd76936d1f-kube-api-access-r2hsm\") pod \"community-operators-llsc9\" (UID: \"40bcd274-ae24-4057-aa88-40fd76936d1f\") " pod="openshift-marketplace/community-operators-llsc9" Jan 29 17:48:03 crc kubenswrapper[4886]: E0129 17:48:03.618076 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:48:03 crc kubenswrapper[4886]: I0129 17:48:03.639779 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40bcd274-ae24-4057-aa88-40fd76936d1f-catalog-content\") pod \"community-operators-llsc9\" (UID: \"40bcd274-ae24-4057-aa88-40fd76936d1f\") " pod="openshift-marketplace/community-operators-llsc9" Jan 29 17:48:03 crc kubenswrapper[4886]: I0129 17:48:03.639877 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2hsm\" (UniqueName: \"kubernetes.io/projected/40bcd274-ae24-4057-aa88-40fd76936d1f-kube-api-access-r2hsm\") pod \"community-operators-llsc9\" (UID: \"40bcd274-ae24-4057-aa88-40fd76936d1f\") " pod="openshift-marketplace/community-operators-llsc9" Jan 29 17:48:03 crc kubenswrapper[4886]: I0129 17:48:03.639936 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40bcd274-ae24-4057-aa88-40fd76936d1f-utilities\") pod \"community-operators-llsc9\" (UID: \"40bcd274-ae24-4057-aa88-40fd76936d1f\") " pod="openshift-marketplace/community-operators-llsc9" Jan 29 
17:48:03 crc kubenswrapper[4886]: I0129 17:48:03.640649 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40bcd274-ae24-4057-aa88-40fd76936d1f-catalog-content\") pod \"community-operators-llsc9\" (UID: \"40bcd274-ae24-4057-aa88-40fd76936d1f\") " pod="openshift-marketplace/community-operators-llsc9" Jan 29 17:48:03 crc kubenswrapper[4886]: I0129 17:48:03.640721 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40bcd274-ae24-4057-aa88-40fd76936d1f-utilities\") pod \"community-operators-llsc9\" (UID: \"40bcd274-ae24-4057-aa88-40fd76936d1f\") " pod="openshift-marketplace/community-operators-llsc9" Jan 29 17:48:03 crc kubenswrapper[4886]: I0129 17:48:03.670248 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2hsm\" (UniqueName: \"kubernetes.io/projected/40bcd274-ae24-4057-aa88-40fd76936d1f-kube-api-access-r2hsm\") pod \"community-operators-llsc9\" (UID: \"40bcd274-ae24-4057-aa88-40fd76936d1f\") " pod="openshift-marketplace/community-operators-llsc9" Jan 29 17:48:03 crc kubenswrapper[4886]: I0129 17:48:03.771983 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-llsc9" Jan 29 17:48:04 crc kubenswrapper[4886]: I0129 17:48:04.343697 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-llsc9"] Jan 29 17:48:04 crc kubenswrapper[4886]: E0129 17:48:04.616831 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:48:04 crc kubenswrapper[4886]: E0129 17:48:04.617598 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:48:05 crc kubenswrapper[4886]: I0129 17:48:05.082785 4886 generic.go:334] "Generic (PLEG): container finished" podID="40bcd274-ae24-4057-aa88-40fd76936d1f" containerID="df37f7c356fb768cbf7232ec3398b6f87349466aec6de2b10e5c22d7da6bdbda" exitCode=0 Jan 29 17:48:05 crc kubenswrapper[4886]: I0129 17:48:05.082831 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llsc9" event={"ID":"40bcd274-ae24-4057-aa88-40fd76936d1f","Type":"ContainerDied","Data":"df37f7c356fb768cbf7232ec3398b6f87349466aec6de2b10e5c22d7da6bdbda"} Jan 29 17:48:05 crc kubenswrapper[4886]: I0129 17:48:05.082861 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llsc9" event={"ID":"40bcd274-ae24-4057-aa88-40fd76936d1f","Type":"ContainerStarted","Data":"a88bf7d409b3544d3199be1655f238f3723fa051005797691698fbfffff6a736"} Jan 29 17:48:05 crc kubenswrapper[4886]: E0129 17:48:05.259533 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" 
image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 17:48:05 crc kubenswrapper[4886]: E0129 17:48:05.260106 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r2hsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-llsc9_openshift-marketplace(40bcd274-ae24-4057-aa88-40fd76936d1f): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:48:05 crc kubenswrapper[4886]: E0129 17:48:05.261589 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-llsc9" podUID="40bcd274-ae24-4057-aa88-40fd76936d1f" Jan 29 17:48:06 crc kubenswrapper[4886]: E0129 17:48:06.099197 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-llsc9" podUID="40bcd274-ae24-4057-aa88-40fd76936d1f" Jan 29 17:48:15 crc kubenswrapper[4886]: E0129 17:48:15.618839 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:48:17 crc kubenswrapper[4886]: E0129 17:48:17.619231 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:48:19 crc kubenswrapper[4886]: E0129 17:48:19.641860 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:48:20 crc kubenswrapper[4886]: E0129 17:48:20.765569 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 17:48:20 crc kubenswrapper[4886]: E0129 17:48:20.765745 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r2hsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-llsc9_openshift-marketplace(40bcd274-ae24-4057-aa88-40fd76936d1f): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:48:20 crc kubenswrapper[4886]: E0129 17:48:20.767006 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-llsc9" podUID="40bcd274-ae24-4057-aa88-40fd76936d1f" Jan 29 17:48:29 crc kubenswrapper[4886]: E0129 17:48:29.617466 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" 
with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:48:29 crc kubenswrapper[4886]: I0129 17:48:29.661429 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:48:29 crc kubenswrapper[4886]: I0129 17:48:29.661501 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:48:29 crc kubenswrapper[4886]: I0129 17:48:29.661554 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" Jan 29 17:48:29 crc kubenswrapper[4886]: I0129 17:48:29.662717 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3"} pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 17:48:29 crc kubenswrapper[4886]: I0129 17:48:29.662796 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" containerID="cri-o://8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3" gracePeriod=600 Jan 29 17:48:29 crc kubenswrapper[4886]: E0129 17:48:29.812738 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:48:30 crc kubenswrapper[4886]: I0129 17:48:30.399872 4886 generic.go:334] "Generic (PLEG): container finished" podID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerID="8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3" exitCode=0 Jan 29 17:48:30 crc kubenswrapper[4886]: I0129 17:48:30.399920 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerDied","Data":"8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3"} Jan 29 17:48:30 crc kubenswrapper[4886]: I0129 17:48:30.399956 4886 scope.go:117] "RemoveContainer" containerID="08c1b8c3edabbeb571f6803cae251f6a7919758b2342154da4b61975a4b2aba4" Jan 29 17:48:30 crc kubenswrapper[4886]: I0129 17:48:30.400728 4886 scope.go:117] "RemoveContainer" containerID="8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3" Jan 29 17:48:30 crc kubenswrapper[4886]: E0129 17:48:30.401026 4886 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:48:32 crc kubenswrapper[4886]: E0129 17:48:32.621653 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:48:32 crc kubenswrapper[4886]: E0129 17:48:32.621696 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:48:33 crc kubenswrapper[4886]: E0129 17:48:33.620680 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-llsc9" podUID="40bcd274-ae24-4057-aa88-40fd76936d1f" Jan 29 17:48:44 crc kubenswrapper[4886]: I0129 17:48:44.615967 4886 scope.go:117] "RemoveContainer" containerID="8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3" Jan 29 17:48:44 crc kubenswrapper[4886]: E0129 17:48:44.617031 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:48:44 crc kubenswrapper[4886]: E0129 17:48:44.620057 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:48:44 crc kubenswrapper[4886]: E0129 17:48:44.620115 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:48:47 crc kubenswrapper[4886]: E0129 17:48:47.622149 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:48:48 crc kubenswrapper[4886]: E0129 17:48:48.851448 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc 
= initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 17:48:48 crc kubenswrapper[4886]: E0129 17:48:48.852178 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r2hsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-llsc9_openshift-marketplace(40bcd274-ae24-4057-aa88-40fd76936d1f): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:48:48 crc kubenswrapper[4886]: E0129 17:48:48.853408 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-llsc9" podUID="40bcd274-ae24-4057-aa88-40fd76936d1f" Jan 29 17:48:55 crc kubenswrapper[4886]: I0129 17:48:55.616399 4886 scope.go:117] "RemoveContainer" containerID="8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3" Jan 29 17:48:55 crc kubenswrapper[4886]: E0129 17:48:55.617799 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:48:55 crc kubenswrapper[4886]: E0129 17:48:55.619209 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off 
pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:48:59 crc kubenswrapper[4886]: E0129 17:48:59.618068 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:49:00 crc kubenswrapper[4886]: E0129 17:49:00.620033 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-llsc9" podUID="40bcd274-ae24-4057-aa88-40fd76936d1f" Jan 29 17:49:02 crc kubenswrapper[4886]: E0129 17:49:02.620761 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:49:09 crc kubenswrapper[4886]: I0129 17:49:09.616107 4886 scope.go:117] "RemoveContainer" containerID="8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3" Jan 29 17:49:09 crc kubenswrapper[4886]: E0129 17:49:09.617542 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:49:09 crc kubenswrapper[4886]: E0129 17:49:09.618733 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:49:12 crc kubenswrapper[4886]: E0129 17:49:12.624652 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:49:15 crc kubenswrapper[4886]: E0129 17:49:15.619596 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-llsc9" podUID="40bcd274-ae24-4057-aa88-40fd76936d1f" Jan 29 17:49:16 crc kubenswrapper[4886]: E0129 17:49:16.618005 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" 
podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:49:22 crc kubenswrapper[4886]: I0129 17:49:22.615281 4886 scope.go:117] "RemoveContainer" containerID="8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3" Jan 29 17:49:22 crc kubenswrapper[4886]: E0129 17:49:22.616456 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:49:23 crc kubenswrapper[4886]: E0129 17:49:23.617811 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:49:26 crc kubenswrapper[4886]: E0129 17:49:26.619124 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:49:27 crc kubenswrapper[4886]: E0129 17:49:27.617993 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-llsc9" podUID="40bcd274-ae24-4057-aa88-40fd76936d1f" Jan 29 17:49:29 crc kubenswrapper[4886]: E0129 17:49:29.618508 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" Jan 29 17:49:34 crc kubenswrapper[4886]: E0129 17:49:34.618993 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" Jan 29 17:49:36 crc kubenswrapper[4886]: I0129 17:49:36.615552 4886 scope.go:117] "RemoveContainer" containerID="8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3" Jan 29 17:49:36 crc kubenswrapper[4886]: E0129 17:49:36.616614 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:49:37 crc kubenswrapper[4886]: E0129 17:49:37.644477 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: 
\"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:49:39 crc kubenswrapper[4886]: I0129 17:49:39.618077 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 17:49:41 crc kubenswrapper[4886]: I0129 17:49:41.456256 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llsc9" event={"ID":"40bcd274-ae24-4057-aa88-40fd76936d1f","Type":"ContainerStarted","Data":"6b9ab19ac11d0ebafbaa4deb030a38d941beeb1b3864ce3572f358b5cd58f896"} Jan 29 17:49:42 crc kubenswrapper[4886]: I0129 17:49:42.466347 4886 generic.go:334] "Generic (PLEG): container finished" podID="40bcd274-ae24-4057-aa88-40fd76936d1f" containerID="6b9ab19ac11d0ebafbaa4deb030a38d941beeb1b3864ce3572f358b5cd58f896" exitCode=0 Jan 29 17:49:42 crc kubenswrapper[4886]: I0129 17:49:42.466481 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llsc9" event={"ID":"40bcd274-ae24-4057-aa88-40fd76936d1f","Type":"ContainerDied","Data":"6b9ab19ac11d0ebafbaa4deb030a38d941beeb1b3864ce3572f358b5cd58f896"} Jan 29 17:49:43 crc kubenswrapper[4886]: I0129 17:49:43.483033 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llsc9" event={"ID":"40bcd274-ae24-4057-aa88-40fd76936d1f","Type":"ContainerStarted","Data":"89e9dc84363622541fc28235465288f22f58590f88406e532aef6fc87edbacce"} Jan 29 17:49:43 crc kubenswrapper[4886]: I0129 17:49:43.527538 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-llsc9" podStartSLOduration=2.5076267249999997 podStartE2EDuration="1m40.52751156s" podCreationTimestamp="2026-01-29 17:48:03 +0000 UTC" firstStartedPulling="2026-01-29 17:48:05.085582003 +0000 UTC m=+5167.994301285" lastFinishedPulling="2026-01-29 17:49:43.105466808 +0000 UTC m=+5266.014186120" observedRunningTime="2026-01-29 17:49:43.512919877 +0000 UTC m=+5266.421639189" watchObservedRunningTime="2026-01-29 17:49:43.52751156 +0000 UTC m=+5266.436230872" Jan 29 17:49:43 crc kubenswrapper[4886]: I0129 17:49:43.772545 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-llsc9" Jan 29 17:49:43 crc kubenswrapper[4886]: I0129 17:49:43.772595 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-llsc9" Jan 29 17:49:44 crc kubenswrapper[4886]: I0129 17:49:44.494144 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bw7c" event={"ID":"c566a66d-f66d-457d-80eb-a0cf5bf4e013","Type":"ContainerStarted","Data":"048fbc3f19e9f2bb3a22233ff84755a02d78dda5d7adaf81250ada584b2655f0"} Jan 29 17:49:44 crc kubenswrapper[4886]: I0129 17:49:44.839218 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-llsc9" podUID="40bcd274-ae24-4057-aa88-40fd76936d1f" containerName="registry-server" probeResult="failure" output=< Jan 29 17:49:44 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Jan 29 17:49:44 crc kubenswrapper[4886]: > Jan 29 17:49:48 crc kubenswrapper[4886]: I0129 17:49:48.631852 4886 scope.go:117] "RemoveContainer" containerID="8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3" Jan 29 17:49:48 crc 
Jan 29 17:49:51 crc kubenswrapper[4886]: I0129 17:49:51.653584 4886 generic.go:334] "Generic (PLEG): container finished" podID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" containerID="048fbc3f19e9f2bb3a22233ff84755a02d78dda5d7adaf81250ada584b2655f0" exitCode=0
Jan 29 17:49:51 crc kubenswrapper[4886]: I0129 17:49:51.653705 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bw7c" event={"ID":"c566a66d-f66d-457d-80eb-a0cf5bf4e013","Type":"ContainerDied","Data":"048fbc3f19e9f2bb3a22233ff84755a02d78dda5d7adaf81250ada584b2655f0"}
Jan 29 17:49:51 crc kubenswrapper[4886]: I0129 17:49:51.659158 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqs8b" event={"ID":"d8da04de-c293-46ce-aeae-b2081be3c077","Type":"ContainerStarted","Data":"d74818ab52ae29443c5955bc1974c1dbb7212a33d4252b321985fe8bb4f905d7"}
Jan 29 17:49:52 crc kubenswrapper[4886]: E0129 17:49:52.618003 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621"
Jan 29 17:49:53 crc kubenswrapper[4886]: I0129 17:49:53.855044 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-llsc9"
Jan 29 17:49:53 crc kubenswrapper[4886]: I0129 17:49:53.922709 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-llsc9"
Jan 29 17:49:54 crc kubenswrapper[4886]: I0129 17:49:54.105698 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-llsc9"]
Jan 29 17:49:54 crc kubenswrapper[4886]: I0129 17:49:54.751364 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bw7c" event={"ID":"c566a66d-f66d-457d-80eb-a0cf5bf4e013","Type":"ContainerStarted","Data":"bc86b5548a2f6b98575b342d99a002bfb0143807c9dd174f5af50b3baca239ba"}
Jan 29 17:49:54 crc kubenswrapper[4886]: I0129 17:49:54.790277 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7bw7c" podStartSLOduration=3.95254472 podStartE2EDuration="11m3.790245022s" podCreationTimestamp="2026-01-29 17:38:51 +0000 UTC" firstStartedPulling="2026-01-29 17:38:53.777547776 +0000 UTC m=+4616.686267078" lastFinishedPulling="2026-01-29 17:49:53.615248098 +0000 UTC m=+5276.523967380" observedRunningTime="2026-01-29 17:49:54.773622811 +0000 UTC m=+5277.682342123" watchObservedRunningTime="2026-01-29 17:49:54.790245022 +0000 UTC m=+5277.698964334"
Jan 29 17:49:55 crc kubenswrapper[4886]: I0129 17:49:55.764605 4886 generic.go:334] "Generic (PLEG): container finished" podID="d8da04de-c293-46ce-aeae-b2081be3c077" containerID="d74818ab52ae29443c5955bc1974c1dbb7212a33d4252b321985fe8bb4f905d7" exitCode=0
Jan 29 17:49:55 crc kubenswrapper[4886]: I0129 17:49:55.764698 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqs8b" event={"ID":"d8da04de-c293-46ce-aeae-b2081be3c077","Type":"ContainerDied","Data":"d74818ab52ae29443c5955bc1974c1dbb7212a33d4252b321985fe8bb4f905d7"}
Jan 29 17:49:55 crc kubenswrapper[4886]: I0129 17:49:55.765096 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-llsc9" podUID="40bcd274-ae24-4057-aa88-40fd76936d1f" containerName="registry-server" containerID="cri-o://89e9dc84363622541fc28235465288f22f58590f88406e532aef6fc87edbacce" gracePeriod=2
Jan 29 17:49:56 crc kubenswrapper[4886]: I0129 17:49:56.386014 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-llsc9"
Jan 29 17:49:56 crc kubenswrapper[4886]: I0129 17:49:56.434870 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2hsm\" (UniqueName: \"kubernetes.io/projected/40bcd274-ae24-4057-aa88-40fd76936d1f-kube-api-access-r2hsm\") pod \"40bcd274-ae24-4057-aa88-40fd76936d1f\" (UID: \"40bcd274-ae24-4057-aa88-40fd76936d1f\") "
Jan 29 17:49:56 crc kubenswrapper[4886]: I0129 17:49:56.435216 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40bcd274-ae24-4057-aa88-40fd76936d1f-catalog-content\") pod \"40bcd274-ae24-4057-aa88-40fd76936d1f\" (UID: \"40bcd274-ae24-4057-aa88-40fd76936d1f\") "
Jan 29 17:49:56 crc kubenswrapper[4886]: I0129 17:49:56.435251 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40bcd274-ae24-4057-aa88-40fd76936d1f-utilities\") pod \"40bcd274-ae24-4057-aa88-40fd76936d1f\" (UID: \"40bcd274-ae24-4057-aa88-40fd76936d1f\") "
Jan 29 17:49:56 crc kubenswrapper[4886]: I0129 17:49:56.437121 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40bcd274-ae24-4057-aa88-40fd76936d1f-utilities" (OuterVolumeSpecName: "utilities") pod "40bcd274-ae24-4057-aa88-40fd76936d1f" (UID: "40bcd274-ae24-4057-aa88-40fd76936d1f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:49:56 crc kubenswrapper[4886]: I0129 17:49:56.455075 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40bcd274-ae24-4057-aa88-40fd76936d1f-kube-api-access-r2hsm" (OuterVolumeSpecName: "kube-api-access-r2hsm") pod "40bcd274-ae24-4057-aa88-40fd76936d1f" (UID: "40bcd274-ae24-4057-aa88-40fd76936d1f"). InnerVolumeSpecName "kube-api-access-r2hsm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:49:56 crc kubenswrapper[4886]: I0129 17:49:56.495872 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40bcd274-ae24-4057-aa88-40fd76936d1f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40bcd274-ae24-4057-aa88-40fd76936d1f" (UID: "40bcd274-ae24-4057-aa88-40fd76936d1f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:49:56 crc kubenswrapper[4886]: I0129 17:49:56.538010 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40bcd274-ae24-4057-aa88-40fd76936d1f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:49:56 crc kubenswrapper[4886]: I0129 17:49:56.538316 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40bcd274-ae24-4057-aa88-40fd76936d1f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:49:56 crc kubenswrapper[4886]: I0129 17:49:56.538351 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2hsm\" (UniqueName: \"kubernetes.io/projected/40bcd274-ae24-4057-aa88-40fd76936d1f-kube-api-access-r2hsm\") on node \"crc\" DevicePath \"\"" Jan 29 17:49:56 crc kubenswrapper[4886]: I0129 17:49:56.774676 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqs8b" event={"ID":"d8da04de-c293-46ce-aeae-b2081be3c077","Type":"ContainerStarted","Data":"ce0a05bd3d497a8a4069a0652d1a5685775d958d3b77b12a4fb2cd4858595486"} Jan 29 17:49:56 crc kubenswrapper[4886]: I0129 17:49:56.777189 4886 generic.go:334] "Generic (PLEG): container finished" podID="40bcd274-ae24-4057-aa88-40fd76936d1f" containerID="89e9dc84363622541fc28235465288f22f58590f88406e532aef6fc87edbacce" exitCode=0 Jan 29 17:49:56 crc kubenswrapper[4886]: I0129 17:49:56.777217 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llsc9" event={"ID":"40bcd274-ae24-4057-aa88-40fd76936d1f","Type":"ContainerDied","Data":"89e9dc84363622541fc28235465288f22f58590f88406e532aef6fc87edbacce"} Jan 29 17:49:56 crc kubenswrapper[4886]: I0129 17:49:56.777262 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llsc9" event={"ID":"40bcd274-ae24-4057-aa88-40fd76936d1f","Type":"ContainerDied","Data":"a88bf7d409b3544d3199be1655f238f3723fa051005797691698fbfffff6a736"} Jan 29 17:49:56 crc kubenswrapper[4886]: I0129 17:49:56.777288 4886 scope.go:117] "RemoveContainer" containerID="89e9dc84363622541fc28235465288f22f58590f88406e532aef6fc87edbacce" Jan 29 17:49:56 crc kubenswrapper[4886]: I0129 17:49:56.777306 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-llsc9" Jan 29 17:49:56 crc kubenswrapper[4886]: I0129 17:49:56.799633 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sqs8b" podStartSLOduration=2.549989153 podStartE2EDuration="10m52.799616773s" podCreationTimestamp="2026-01-29 17:39:04 +0000 UTC" firstStartedPulling="2026-01-29 17:39:05.948449894 +0000 UTC m=+4628.857169166" lastFinishedPulling="2026-01-29 17:49:56.198077514 +0000 UTC m=+5279.106796786" observedRunningTime="2026-01-29 17:49:56.794854238 +0000 UTC m=+5279.703573510" watchObservedRunningTime="2026-01-29 17:49:56.799616773 +0000 UTC m=+5279.708336035" Jan 29 17:49:56 crc kubenswrapper[4886]: I0129 17:49:56.802158 4886 scope.go:117] "RemoveContainer" containerID="6b9ab19ac11d0ebafbaa4deb030a38d941beeb1b3864ce3572f358b5cd58f896" Jan 29 17:49:56 crc kubenswrapper[4886]: I0129 17:49:56.828679 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-llsc9"] Jan 29 17:49:56 crc kubenswrapper[4886]: I0129 17:49:56.830738 4886 scope.go:117] "RemoveContainer" containerID="df37f7c356fb768cbf7232ec3398b6f87349466aec6de2b10e5c22d7da6bdbda" Jan 29 17:49:56 crc kubenswrapper[4886]: I0129 17:49:56.842241 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-llsc9"] Jan 29 17:49:56 crc kubenswrapper[4886]: I0129 17:49:56.862865 4886 scope.go:117] "RemoveContainer" containerID="89e9dc84363622541fc28235465288f22f58590f88406e532aef6fc87edbacce" Jan 29 17:49:56 crc kubenswrapper[4886]: E0129 17:49:56.863348 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89e9dc84363622541fc28235465288f22f58590f88406e532aef6fc87edbacce\": container with ID starting with 89e9dc84363622541fc28235465288f22f58590f88406e532aef6fc87edbacce not found: ID does not exist" containerID="89e9dc84363622541fc28235465288f22f58590f88406e532aef6fc87edbacce" Jan 29 17:49:56 crc kubenswrapper[4886]: I0129 17:49:56.863390 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89e9dc84363622541fc28235465288f22f58590f88406e532aef6fc87edbacce"} err="failed to get container status \"89e9dc84363622541fc28235465288f22f58590f88406e532aef6fc87edbacce\": rpc error: code = NotFound desc = could not find container \"89e9dc84363622541fc28235465288f22f58590f88406e532aef6fc87edbacce\": container with ID starting with 89e9dc84363622541fc28235465288f22f58590f88406e532aef6fc87edbacce not found: ID does not exist" Jan 29 17:49:56 crc kubenswrapper[4886]: I0129 17:49:56.863422 4886 scope.go:117] "RemoveContainer" containerID="6b9ab19ac11d0ebafbaa4deb030a38d941beeb1b3864ce3572f358b5cd58f896" Jan 29 17:49:56 crc kubenswrapper[4886]: E0129 17:49:56.863823 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b9ab19ac11d0ebafbaa4deb030a38d941beeb1b3864ce3572f358b5cd58f896\": container with ID starting with 6b9ab19ac11d0ebafbaa4deb030a38d941beeb1b3864ce3572f358b5cd58f896 not found: ID does not exist" containerID="6b9ab19ac11d0ebafbaa4deb030a38d941beeb1b3864ce3572f358b5cd58f896" Jan 29 17:49:56 crc kubenswrapper[4886]: I0129 17:49:56.863870 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b9ab19ac11d0ebafbaa4deb030a38d941beeb1b3864ce3572f358b5cd58f896"} err="failed to get 
container status \"6b9ab19ac11d0ebafbaa4deb030a38d941beeb1b3864ce3572f358b5cd58f896\": rpc error: code = NotFound desc = could not find container \"6b9ab19ac11d0ebafbaa4deb030a38d941beeb1b3864ce3572f358b5cd58f896\": container with ID starting with 6b9ab19ac11d0ebafbaa4deb030a38d941beeb1b3864ce3572f358b5cd58f896 not found: ID does not exist" Jan 29 17:49:56 crc kubenswrapper[4886]: I0129 17:49:56.863899 4886 scope.go:117] "RemoveContainer" containerID="df37f7c356fb768cbf7232ec3398b6f87349466aec6de2b10e5c22d7da6bdbda" Jan 29 17:49:56 crc kubenswrapper[4886]: E0129 17:49:56.864165 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df37f7c356fb768cbf7232ec3398b6f87349466aec6de2b10e5c22d7da6bdbda\": container with ID starting with df37f7c356fb768cbf7232ec3398b6f87349466aec6de2b10e5c22d7da6bdbda not found: ID does not exist" containerID="df37f7c356fb768cbf7232ec3398b6f87349466aec6de2b10e5c22d7da6bdbda" Jan 29 17:49:56 crc kubenswrapper[4886]: I0129 17:49:56.864186 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df37f7c356fb768cbf7232ec3398b6f87349466aec6de2b10e5c22d7da6bdbda"} err="failed to get container status \"df37f7c356fb768cbf7232ec3398b6f87349466aec6de2b10e5c22d7da6bdbda\": rpc error: code = NotFound desc = could not find container \"df37f7c356fb768cbf7232ec3398b6f87349466aec6de2b10e5c22d7da6bdbda\": container with ID starting with df37f7c356fb768cbf7232ec3398b6f87349466aec6de2b10e5c22d7da6bdbda not found: ID does not exist" Jan 29 17:49:58 crc kubenswrapper[4886]: I0129 17:49:58.630676 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40bcd274-ae24-4057-aa88-40fd76936d1f" path="/var/lib/kubelet/pods/40bcd274-ae24-4057-aa88-40fd76936d1f/volumes" Jan 29 17:50:02 crc kubenswrapper[4886]: I0129 17:50:02.034269 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7bw7c" Jan 29 17:50:02 crc kubenswrapper[4886]: I0129 17:50:02.035035 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7bw7c" Jan 29 17:50:03 crc kubenswrapper[4886]: I0129 17:50:03.109863 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" containerName="registry-server" probeResult="failure" output=< Jan 29 17:50:03 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Jan 29 17:50:03 crc kubenswrapper[4886]: > Jan 29 17:50:03 crc kubenswrapper[4886]: I0129 17:50:03.615728 4886 scope.go:117] "RemoveContainer" containerID="8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3" Jan 29 17:50:03 crc kubenswrapper[4886]: E0129 17:50:03.617223 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:50:03 crc kubenswrapper[4886]: E0129 17:50:03.617783 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 29 17:50:04 crc kubenswrapper[4886]: I0129 17:50:04.512892 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sqs8b"
Jan 29 17:50:04 crc kubenswrapper[4886]: I0129 17:50:04.512967 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sqs8b"
Jan 29 17:50:04 crc kubenswrapper[4886]: I0129 17:50:04.610936 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sqs8b"
Jan 29 17:50:04 crc kubenswrapper[4886]: I0129 17:50:04.966617 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sqs8b"
Jan 29 17:50:05 crc kubenswrapper[4886]: I0129 17:50:05.044093 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqs8b"]
Jan 29 17:50:06 crc kubenswrapper[4886]: I0129 17:50:06.930274 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sqs8b" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" containerName="registry-server" containerID="cri-o://ce0a05bd3d497a8a4069a0652d1a5685775d958d3b77b12a4fb2cd4858595486" gracePeriod=2
Jan 29 17:50:07 crc kubenswrapper[4886]: I0129 17:50:07.692033 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqs8b"
Jan 29 17:50:07 crc kubenswrapper[4886]: I0129 17:50:07.715603 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4d4z\" (UniqueName: \"kubernetes.io/projected/d8da04de-c293-46ce-aeae-b2081be3c077-kube-api-access-q4d4z\") pod \"d8da04de-c293-46ce-aeae-b2081be3c077\" (UID: \"d8da04de-c293-46ce-aeae-b2081be3c077\") "
Jan 29 17:50:07 crc kubenswrapper[4886]: I0129 17:50:07.715662 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8da04de-c293-46ce-aeae-b2081be3c077-utilities\") pod \"d8da04de-c293-46ce-aeae-b2081be3c077\" (UID: \"d8da04de-c293-46ce-aeae-b2081be3c077\") "
Jan 29 17:50:07 crc kubenswrapper[4886]: I0129 17:50:07.715940 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8da04de-c293-46ce-aeae-b2081be3c077-catalog-content\") pod \"d8da04de-c293-46ce-aeae-b2081be3c077\" (UID: \"d8da04de-c293-46ce-aeae-b2081be3c077\") "
Jan 29 17:50:07 crc kubenswrapper[4886]: I0129 17:50:07.718026 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8da04de-c293-46ce-aeae-b2081be3c077-utilities" (OuterVolumeSpecName: "utilities") pod "d8da04de-c293-46ce-aeae-b2081be3c077" (UID: "d8da04de-c293-46ce-aeae-b2081be3c077"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:50:07 crc kubenswrapper[4886]: I0129 17:50:07.739681 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8da04de-c293-46ce-aeae-b2081be3c077-kube-api-access-q4d4z" (OuterVolumeSpecName: "kube-api-access-q4d4z") pod "d8da04de-c293-46ce-aeae-b2081be3c077" (UID: "d8da04de-c293-46ce-aeae-b2081be3c077"). InnerVolumeSpecName "kube-api-access-q4d4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:50:07 crc kubenswrapper[4886]: I0129 17:50:07.749287 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8da04de-c293-46ce-aeae-b2081be3c077-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d8da04de-c293-46ce-aeae-b2081be3c077" (UID: "d8da04de-c293-46ce-aeae-b2081be3c077"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:50:07 crc kubenswrapper[4886]: I0129 17:50:07.819108 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8da04de-c293-46ce-aeae-b2081be3c077-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:50:07 crc kubenswrapper[4886]: I0129 17:50:07.819137 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4d4z\" (UniqueName: \"kubernetes.io/projected/d8da04de-c293-46ce-aeae-b2081be3c077-kube-api-access-q4d4z\") on node \"crc\" DevicePath \"\"" Jan 29 17:50:07 crc kubenswrapper[4886]: I0129 17:50:07.819147 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8da04de-c293-46ce-aeae-b2081be3c077-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:50:07 crc kubenswrapper[4886]: I0129 17:50:07.945575 4886 generic.go:334] "Generic (PLEG): container finished" podID="d8da04de-c293-46ce-aeae-b2081be3c077" containerID="ce0a05bd3d497a8a4069a0652d1a5685775d958d3b77b12a4fb2cd4858595486" exitCode=0 Jan 29 17:50:07 crc kubenswrapper[4886]: I0129 17:50:07.945625 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqs8b" event={"ID":"d8da04de-c293-46ce-aeae-b2081be3c077","Type":"ContainerDied","Data":"ce0a05bd3d497a8a4069a0652d1a5685775d958d3b77b12a4fb2cd4858595486"} Jan 29 17:50:07 crc kubenswrapper[4886]: I0129 17:50:07.945653 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqs8b" event={"ID":"d8da04de-c293-46ce-aeae-b2081be3c077","Type":"ContainerDied","Data":"efefc164eab7dbbf5bc524a94050b180180d68604ff2396211c4fb6aee8d9fad"} Jan 29 17:50:07 crc kubenswrapper[4886]: I0129 17:50:07.945670 4886 scope.go:117] "RemoveContainer" containerID="ce0a05bd3d497a8a4069a0652d1a5685775d958d3b77b12a4fb2cd4858595486" Jan 29 17:50:07 crc kubenswrapper[4886]: I0129 17:50:07.945805 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqs8b" Jan 29 17:50:07 crc kubenswrapper[4886]: I0129 17:50:07.990309 4886 scope.go:117] "RemoveContainer" containerID="d74818ab52ae29443c5955bc1974c1dbb7212a33d4252b321985fe8bb4f905d7" Jan 29 17:50:08 crc kubenswrapper[4886]: I0129 17:50:08.001871 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqs8b"] Jan 29 17:50:08 crc kubenswrapper[4886]: I0129 17:50:08.011693 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqs8b"] Jan 29 17:50:08 crc kubenswrapper[4886]: I0129 17:50:08.014686 4886 scope.go:117] "RemoveContainer" containerID="95fe5d5ec1cc0c1d3c6bdcb2b0f28f4b7f72e0b8cf33d409b80c3bfccdde3d22" Jan 29 17:50:08 crc kubenswrapper[4886]: I0129 17:50:08.090457 4886 scope.go:117] "RemoveContainer" containerID="ce0a05bd3d497a8a4069a0652d1a5685775d958d3b77b12a4fb2cd4858595486" Jan 29 17:50:08 crc kubenswrapper[4886]: E0129 17:50:08.091129 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce0a05bd3d497a8a4069a0652d1a5685775d958d3b77b12a4fb2cd4858595486\": container with ID starting with ce0a05bd3d497a8a4069a0652d1a5685775d958d3b77b12a4fb2cd4858595486 not found: ID does not exist" containerID="ce0a05bd3d497a8a4069a0652d1a5685775d958d3b77b12a4fb2cd4858595486" Jan 29 17:50:08 crc kubenswrapper[4886]: I0129 17:50:08.091178 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce0a05bd3d497a8a4069a0652d1a5685775d958d3b77b12a4fb2cd4858595486"} err="failed to get container status \"ce0a05bd3d497a8a4069a0652d1a5685775d958d3b77b12a4fb2cd4858595486\": rpc error: code = NotFound desc = could not find container \"ce0a05bd3d497a8a4069a0652d1a5685775d958d3b77b12a4fb2cd4858595486\": container with ID starting with ce0a05bd3d497a8a4069a0652d1a5685775d958d3b77b12a4fb2cd4858595486 not found: ID does not exist" Jan 29 17:50:08 crc kubenswrapper[4886]: I0129 17:50:08.091206 4886 scope.go:117] "RemoveContainer" containerID="d74818ab52ae29443c5955bc1974c1dbb7212a33d4252b321985fe8bb4f905d7" Jan 29 17:50:08 crc kubenswrapper[4886]: E0129 17:50:08.091915 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d74818ab52ae29443c5955bc1974c1dbb7212a33d4252b321985fe8bb4f905d7\": container with ID starting with d74818ab52ae29443c5955bc1974c1dbb7212a33d4252b321985fe8bb4f905d7 not found: ID does not exist" containerID="d74818ab52ae29443c5955bc1974c1dbb7212a33d4252b321985fe8bb4f905d7" Jan 29 17:50:08 crc kubenswrapper[4886]: I0129 17:50:08.091951 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d74818ab52ae29443c5955bc1974c1dbb7212a33d4252b321985fe8bb4f905d7"} err="failed to get container status \"d74818ab52ae29443c5955bc1974c1dbb7212a33d4252b321985fe8bb4f905d7\": rpc error: code = NotFound desc = could not find container \"d74818ab52ae29443c5955bc1974c1dbb7212a33d4252b321985fe8bb4f905d7\": container with ID starting with d74818ab52ae29443c5955bc1974c1dbb7212a33d4252b321985fe8bb4f905d7 not found: ID does not exist" Jan 29 17:50:08 crc kubenswrapper[4886]: I0129 17:50:08.091985 4886 scope.go:117] "RemoveContainer" containerID="95fe5d5ec1cc0c1d3c6bdcb2b0f28f4b7f72e0b8cf33d409b80c3bfccdde3d22" Jan 29 17:50:08 crc kubenswrapper[4886]: E0129 17:50:08.092576 4886 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"95fe5d5ec1cc0c1d3c6bdcb2b0f28f4b7f72e0b8cf33d409b80c3bfccdde3d22\": container with ID starting with 95fe5d5ec1cc0c1d3c6bdcb2b0f28f4b7f72e0b8cf33d409b80c3bfccdde3d22 not found: ID does not exist" containerID="95fe5d5ec1cc0c1d3c6bdcb2b0f28f4b7f72e0b8cf33d409b80c3bfccdde3d22" Jan 29 17:50:08 crc kubenswrapper[4886]: I0129 17:50:08.092605 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95fe5d5ec1cc0c1d3c6bdcb2b0f28f4b7f72e0b8cf33d409b80c3bfccdde3d22"} err="failed to get container status \"95fe5d5ec1cc0c1d3c6bdcb2b0f28f4b7f72e0b8cf33d409b80c3bfccdde3d22\": rpc error: code = NotFound desc = could not find container \"95fe5d5ec1cc0c1d3c6bdcb2b0f28f4b7f72e0b8cf33d409b80c3bfccdde3d22\": container with ID starting with 95fe5d5ec1cc0c1d3c6bdcb2b0f28f4b7f72e0b8cf33d409b80c3bfccdde3d22 not found: ID does not exist" Jan 29 17:50:08 crc kubenswrapper[4886]: I0129 17:50:08.631109 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" path="/var/lib/kubelet/pods/d8da04de-c293-46ce-aeae-b2081be3c077/volumes" Jan 29 17:50:09 crc kubenswrapper[4886]: I0129 17:50:09.267299 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lrwxm"] Jan 29 17:50:09 crc kubenswrapper[4886]: E0129 17:50:09.268006 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" containerName="extract-utilities" Jan 29 17:50:09 crc kubenswrapper[4886]: I0129 17:50:09.268022 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" containerName="extract-utilities" Jan 29 17:50:09 crc kubenswrapper[4886]: E0129 17:50:09.268043 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40bcd274-ae24-4057-aa88-40fd76936d1f" containerName="extract-utilities" Jan 29 17:50:09 crc kubenswrapper[4886]: I0129 17:50:09.268051 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="40bcd274-ae24-4057-aa88-40fd76936d1f" containerName="extract-utilities" Jan 29 17:50:09 crc kubenswrapper[4886]: E0129 17:50:09.268078 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40bcd274-ae24-4057-aa88-40fd76936d1f" containerName="extract-content" Jan 29 17:50:09 crc kubenswrapper[4886]: I0129 17:50:09.268086 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="40bcd274-ae24-4057-aa88-40fd76936d1f" containerName="extract-content" Jan 29 17:50:09 crc kubenswrapper[4886]: E0129 17:50:09.268119 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" containerName="extract-content" Jan 29 17:50:09 crc kubenswrapper[4886]: I0129 17:50:09.268127 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" containerName="extract-content" Jan 29 17:50:09 crc kubenswrapper[4886]: E0129 17:50:09.268138 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" containerName="registry-server" Jan 29 17:50:09 crc kubenswrapper[4886]: I0129 17:50:09.268145 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" containerName="registry-server" Jan 29 17:50:09 crc kubenswrapper[4886]: E0129 17:50:09.268160 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40bcd274-ae24-4057-aa88-40fd76936d1f" 
containerName="registry-server" Jan 29 17:50:09 crc kubenswrapper[4886]: I0129 17:50:09.268167 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="40bcd274-ae24-4057-aa88-40fd76936d1f" containerName="registry-server" Jan 29 17:50:09 crc kubenswrapper[4886]: I0129 17:50:09.268504 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8da04de-c293-46ce-aeae-b2081be3c077" containerName="registry-server" Jan 29 17:50:09 crc kubenswrapper[4886]: I0129 17:50:09.268522 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="40bcd274-ae24-4057-aa88-40fd76936d1f" containerName="registry-server" Jan 29 17:50:09 crc kubenswrapper[4886]: I0129 17:50:09.270764 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lrwxm" Jan 29 17:50:09 crc kubenswrapper[4886]: I0129 17:50:09.309152 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lrwxm"] Jan 29 17:50:09 crc kubenswrapper[4886]: I0129 17:50:09.360565 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ph9l\" (UniqueName: \"kubernetes.io/projected/54c33a8c-623a-409c-8586-7b4c3c1c0510-kube-api-access-2ph9l\") pod \"redhat-marketplace-lrwxm\" (UID: \"54c33a8c-623a-409c-8586-7b4c3c1c0510\") " pod="openshift-marketplace/redhat-marketplace-lrwxm" Jan 29 17:50:09 crc kubenswrapper[4886]: I0129 17:50:09.360697 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54c33a8c-623a-409c-8586-7b4c3c1c0510-utilities\") pod \"redhat-marketplace-lrwxm\" (UID: \"54c33a8c-623a-409c-8586-7b4c3c1c0510\") " pod="openshift-marketplace/redhat-marketplace-lrwxm" Jan 29 17:50:09 crc kubenswrapper[4886]: I0129 17:50:09.361149 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54c33a8c-623a-409c-8586-7b4c3c1c0510-catalog-content\") pod \"redhat-marketplace-lrwxm\" (UID: \"54c33a8c-623a-409c-8586-7b4c3c1c0510\") " pod="openshift-marketplace/redhat-marketplace-lrwxm" Jan 29 17:50:09 crc kubenswrapper[4886]: I0129 17:50:09.464287 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54c33a8c-623a-409c-8586-7b4c3c1c0510-catalog-content\") pod \"redhat-marketplace-lrwxm\" (UID: \"54c33a8c-623a-409c-8586-7b4c3c1c0510\") " pod="openshift-marketplace/redhat-marketplace-lrwxm" Jan 29 17:50:09 crc kubenswrapper[4886]: I0129 17:50:09.464628 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ph9l\" (UniqueName: \"kubernetes.io/projected/54c33a8c-623a-409c-8586-7b4c3c1c0510-kube-api-access-2ph9l\") pod \"redhat-marketplace-lrwxm\" (UID: \"54c33a8c-623a-409c-8586-7b4c3c1c0510\") " pod="openshift-marketplace/redhat-marketplace-lrwxm" Jan 29 17:50:09 crc kubenswrapper[4886]: I0129 17:50:09.464727 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54c33a8c-623a-409c-8586-7b4c3c1c0510-utilities\") pod \"redhat-marketplace-lrwxm\" (UID: \"54c33a8c-623a-409c-8586-7b4c3c1c0510\") " pod="openshift-marketplace/redhat-marketplace-lrwxm" Jan 29 17:50:09 crc kubenswrapper[4886]: I0129 17:50:09.464993 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54c33a8c-623a-409c-8586-7b4c3c1c0510-catalog-content\") pod \"redhat-marketplace-lrwxm\" (UID: \"54c33a8c-623a-409c-8586-7b4c3c1c0510\") " pod="openshift-marketplace/redhat-marketplace-lrwxm" Jan 29 17:50:09 crc kubenswrapper[4886]: I0129 17:50:09.465278 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54c33a8c-623a-409c-8586-7b4c3c1c0510-utilities\") pod \"redhat-marketplace-lrwxm\" (UID: \"54c33a8c-623a-409c-8586-7b4c3c1c0510\") " pod="openshift-marketplace/redhat-marketplace-lrwxm" Jan 29 17:50:10 crc kubenswrapper[4886]: I0129 17:50:10.263616 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ph9l\" (UniqueName: \"kubernetes.io/projected/54c33a8c-623a-409c-8586-7b4c3c1c0510-kube-api-access-2ph9l\") pod \"redhat-marketplace-lrwxm\" (UID: \"54c33a8c-623a-409c-8586-7b4c3c1c0510\") " pod="openshift-marketplace/redhat-marketplace-lrwxm" Jan 29 17:50:10 crc kubenswrapper[4886]: I0129 17:50:10.503541 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lrwxm" Jan 29 17:50:11 crc kubenswrapper[4886]: I0129 17:50:11.119752 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lrwxm"] Jan 29 17:50:12 crc kubenswrapper[4886]: I0129 17:50:12.037085 4886 generic.go:334] "Generic (PLEG): container finished" podID="54c33a8c-623a-409c-8586-7b4c3c1c0510" containerID="8c89c20de1b6c1aa5e210e0a36da94a9f8bda518322088c323c4b12e10362b1c" exitCode=0 Jan 29 17:50:12 crc kubenswrapper[4886]: I0129 17:50:12.037489 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrwxm" event={"ID":"54c33a8c-623a-409c-8586-7b4c3c1c0510","Type":"ContainerDied","Data":"8c89c20de1b6c1aa5e210e0a36da94a9f8bda518322088c323c4b12e10362b1c"} Jan 29 17:50:12 crc kubenswrapper[4886]: I0129 17:50:12.037569 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrwxm" event={"ID":"54c33a8c-623a-409c-8586-7b4c3c1c0510","Type":"ContainerStarted","Data":"cef715b4d1f263de7fa710e1aeabb63fa13c49ab6f66feea0bfb2c6c3415b7ca"} Jan 29 17:50:13 crc kubenswrapper[4886]: I0129 17:50:13.048622 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrwxm" event={"ID":"54c33a8c-623a-409c-8586-7b4c3c1c0510","Type":"ContainerStarted","Data":"a6282dc7f559738f374de038d372b30a6cdc01fff3a49010d814fb2959bb189a"} Jan 29 17:50:13 crc kubenswrapper[4886]: I0129 17:50:13.089957 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" containerName="registry-server" probeResult="failure" output=< Jan 29 17:50:13 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Jan 29 17:50:13 crc kubenswrapper[4886]: > Jan 29 17:50:14 crc kubenswrapper[4886]: I0129 17:50:14.065832 4886 generic.go:334] "Generic (PLEG): container finished" podID="54c33a8c-623a-409c-8586-7b4c3c1c0510" containerID="a6282dc7f559738f374de038d372b30a6cdc01fff3a49010d814fb2959bb189a" exitCode=0 Jan 29 17:50:14 crc kubenswrapper[4886]: I0129 17:50:14.066056 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrwxm" 
event={"ID":"54c33a8c-623a-409c-8586-7b4c3c1c0510","Type":"ContainerDied","Data":"a6282dc7f559738f374de038d372b30a6cdc01fff3a49010d814fb2959bb189a"} Jan 29 17:50:15 crc kubenswrapper[4886]: I0129 17:50:15.084726 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrwxm" event={"ID":"54c33a8c-623a-409c-8586-7b4c3c1c0510","Type":"ContainerStarted","Data":"27f046a674100ab834a75f639ec3d0dce4924491f8bfcffb76b533e0fac55c45"} Jan 29 17:50:17 crc kubenswrapper[4886]: I0129 17:50:17.616681 4886 scope.go:117] "RemoveContainer" containerID="8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3" Jan 29 17:50:17 crc kubenswrapper[4886]: E0129 17:50:17.617954 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:50:18 crc kubenswrapper[4886]: E0129 17:50:18.635526 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:50:20 crc kubenswrapper[4886]: I0129 17:50:20.503848 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lrwxm" Jan 29 17:50:20 crc kubenswrapper[4886]: I0129 17:50:20.504204 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lrwxm" Jan 29 17:50:20 crc kubenswrapper[4886]: I0129 17:50:20.603938 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lrwxm" Jan 29 17:50:20 crc kubenswrapper[4886]: I0129 17:50:20.650942 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lrwxm" podStartSLOduration=9.171257753 podStartE2EDuration="11.650914984s" podCreationTimestamp="2026-01-29 17:50:09 +0000 UTC" firstStartedPulling="2026-01-29 17:50:12.041390533 +0000 UTC m=+5294.950109825" lastFinishedPulling="2026-01-29 17:50:14.521047764 +0000 UTC m=+5297.429767056" observedRunningTime="2026-01-29 17:50:15.116718847 +0000 UTC m=+5298.025438119" watchObservedRunningTime="2026-01-29 17:50:20.650914984 +0000 UTC m=+5303.559634296" Jan 29 17:50:21 crc kubenswrapper[4886]: I0129 17:50:21.228360 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lrwxm" Jan 29 17:50:21 crc kubenswrapper[4886]: I0129 17:50:21.276235 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lrwxm"] Jan 29 17:50:22 crc kubenswrapper[4886]: I0129 17:50:22.548677 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7bw7c" Jan 29 17:50:22 crc kubenswrapper[4886]: I0129 17:50:22.641340 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7bw7c" Jan 29 17:50:23 crc kubenswrapper[4886]: I0129 17:50:23.174882 4886 
Jan 29 17:50:23 crc kubenswrapper[4886]: I0129 17:50:23.263588 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7bw7c"]
Jan 29 17:50:23 crc kubenswrapper[4886]: I0129 17:50:23.779361 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lrwxm"
Jan 29 17:50:23 crc kubenswrapper[4886]: I0129 17:50:23.884001 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ph9l\" (UniqueName: \"kubernetes.io/projected/54c33a8c-623a-409c-8586-7b4c3c1c0510-kube-api-access-2ph9l\") pod \"54c33a8c-623a-409c-8586-7b4c3c1c0510\" (UID: \"54c33a8c-623a-409c-8586-7b4c3c1c0510\") "
Jan 29 17:50:23 crc kubenswrapper[4886]: I0129 17:50:23.884524 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54c33a8c-623a-409c-8586-7b4c3c1c0510-catalog-content\") pod \"54c33a8c-623a-409c-8586-7b4c3c1c0510\" (UID: \"54c33a8c-623a-409c-8586-7b4c3c1c0510\") "
Jan 29 17:50:23 crc kubenswrapper[4886]: I0129 17:50:23.884676 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54c33a8c-623a-409c-8586-7b4c3c1c0510-utilities\") pod \"54c33a8c-623a-409c-8586-7b4c3c1c0510\" (UID: \"54c33a8c-623a-409c-8586-7b4c3c1c0510\") "
Jan 29 17:50:23 crc kubenswrapper[4886]: I0129 17:50:23.885700 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54c33a8c-623a-409c-8586-7b4c3c1c0510-utilities" (OuterVolumeSpecName: "utilities") pod "54c33a8c-623a-409c-8586-7b4c3c1c0510" (UID: "54c33a8c-623a-409c-8586-7b4c3c1c0510"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:50:23 crc kubenswrapper[4886]: I0129 17:50:23.891252 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54c33a8c-623a-409c-8586-7b4c3c1c0510-kube-api-access-2ph9l" (OuterVolumeSpecName: "kube-api-access-2ph9l") pod "54c33a8c-623a-409c-8586-7b4c3c1c0510" (UID: "54c33a8c-623a-409c-8586-7b4c3c1c0510"). InnerVolumeSpecName "kube-api-access-2ph9l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:50:23 crc kubenswrapper[4886]: I0129 17:50:23.908005 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54c33a8c-623a-409c-8586-7b4c3c1c0510-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "54c33a8c-623a-409c-8586-7b4c3c1c0510" (UID: "54c33a8c-623a-409c-8586-7b4c3c1c0510"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:50:23 crc kubenswrapper[4886]: I0129 17:50:23.988601 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ph9l\" (UniqueName: \"kubernetes.io/projected/54c33a8c-623a-409c-8586-7b4c3c1c0510-kube-api-access-2ph9l\") on node \"crc\" DevicePath \"\"" Jan 29 17:50:23 crc kubenswrapper[4886]: I0129 17:50:23.988656 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54c33a8c-623a-409c-8586-7b4c3c1c0510-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:50:23 crc kubenswrapper[4886]: I0129 17:50:23.988676 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54c33a8c-623a-409c-8586-7b4c3c1c0510-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:50:24 crc kubenswrapper[4886]: I0129 17:50:24.185244 4886 generic.go:334] "Generic (PLEG): container finished" podID="54c33a8c-623a-409c-8586-7b4c3c1c0510" containerID="27f046a674100ab834a75f639ec3d0dce4924491f8bfcffb76b533e0fac55c45" exitCode=0 Jan 29 17:50:24 crc kubenswrapper[4886]: I0129 17:50:24.185351 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrwxm" event={"ID":"54c33a8c-623a-409c-8586-7b4c3c1c0510","Type":"ContainerDied","Data":"27f046a674100ab834a75f639ec3d0dce4924491f8bfcffb76b533e0fac55c45"} Jan 29 17:50:24 crc kubenswrapper[4886]: I0129 17:50:24.185445 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrwxm" event={"ID":"54c33a8c-623a-409c-8586-7b4c3c1c0510","Type":"ContainerDied","Data":"cef715b4d1f263de7fa710e1aeabb63fa13c49ab6f66feea0bfb2c6c3415b7ca"} Jan 29 17:50:24 crc kubenswrapper[4886]: I0129 17:50:24.185475 4886 scope.go:117] "RemoveContainer" containerID="27f046a674100ab834a75f639ec3d0dce4924491f8bfcffb76b533e0fac55c45" Jan 29 17:50:24 crc kubenswrapper[4886]: I0129 17:50:24.185481 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7bw7c" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" containerName="registry-server" containerID="cri-o://bc86b5548a2f6b98575b342d99a002bfb0143807c9dd174f5af50b3baca239ba" gracePeriod=2 Jan 29 17:50:24 crc kubenswrapper[4886]: I0129 17:50:24.185835 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lrwxm" Jan 29 17:50:24 crc kubenswrapper[4886]: I0129 17:50:24.222029 4886 scope.go:117] "RemoveContainer" containerID="a6282dc7f559738f374de038d372b30a6cdc01fff3a49010d814fb2959bb189a" Jan 29 17:50:24 crc kubenswrapper[4886]: I0129 17:50:24.227474 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lrwxm"] Jan 29 17:50:24 crc kubenswrapper[4886]: I0129 17:50:24.245667 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lrwxm"] Jan 29 17:50:24 crc kubenswrapper[4886]: I0129 17:50:24.265915 4886 scope.go:117] "RemoveContainer" containerID="8c89c20de1b6c1aa5e210e0a36da94a9f8bda518322088c323c4b12e10362b1c" Jan 29 17:50:24 crc kubenswrapper[4886]: I0129 17:50:24.462113 4886 scope.go:117] "RemoveContainer" containerID="27f046a674100ab834a75f639ec3d0dce4924491f8bfcffb76b533e0fac55c45" Jan 29 17:50:24 crc kubenswrapper[4886]: E0129 17:50:24.462677 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27f046a674100ab834a75f639ec3d0dce4924491f8bfcffb76b533e0fac55c45\": container with ID starting with 27f046a674100ab834a75f639ec3d0dce4924491f8bfcffb76b533e0fac55c45 not found: ID does not exist" containerID="27f046a674100ab834a75f639ec3d0dce4924491f8bfcffb76b533e0fac55c45" Jan 29 17:50:24 crc kubenswrapper[4886]: I0129 17:50:24.462730 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27f046a674100ab834a75f639ec3d0dce4924491f8bfcffb76b533e0fac55c45"} err="failed to get container status \"27f046a674100ab834a75f639ec3d0dce4924491f8bfcffb76b533e0fac55c45\": rpc error: code = NotFound desc = could not find container \"27f046a674100ab834a75f639ec3d0dce4924491f8bfcffb76b533e0fac55c45\": container with ID starting with 27f046a674100ab834a75f639ec3d0dce4924491f8bfcffb76b533e0fac55c45 not found: ID does not exist" Jan 29 17:50:24 crc kubenswrapper[4886]: I0129 17:50:24.462760 4886 scope.go:117] "RemoveContainer" containerID="a6282dc7f559738f374de038d372b30a6cdc01fff3a49010d814fb2959bb189a" Jan 29 17:50:24 crc kubenswrapper[4886]: E0129 17:50:24.465800 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6282dc7f559738f374de038d372b30a6cdc01fff3a49010d814fb2959bb189a\": container with ID starting with a6282dc7f559738f374de038d372b30a6cdc01fff3a49010d814fb2959bb189a not found: ID does not exist" containerID="a6282dc7f559738f374de038d372b30a6cdc01fff3a49010d814fb2959bb189a" Jan 29 17:50:24 crc kubenswrapper[4886]: I0129 17:50:24.465858 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6282dc7f559738f374de038d372b30a6cdc01fff3a49010d814fb2959bb189a"} err="failed to get container status \"a6282dc7f559738f374de038d372b30a6cdc01fff3a49010d814fb2959bb189a\": rpc error: code = NotFound desc = could not find container \"a6282dc7f559738f374de038d372b30a6cdc01fff3a49010d814fb2959bb189a\": container with ID starting with a6282dc7f559738f374de038d372b30a6cdc01fff3a49010d814fb2959bb189a not found: ID does not exist" Jan 29 17:50:24 crc kubenswrapper[4886]: I0129 17:50:24.465881 4886 scope.go:117] "RemoveContainer" containerID="8c89c20de1b6c1aa5e210e0a36da94a9f8bda518322088c323c4b12e10362b1c" Jan 29 17:50:24 crc kubenswrapper[4886]: E0129 17:50:24.466237 4886 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8c89c20de1b6c1aa5e210e0a36da94a9f8bda518322088c323c4b12e10362b1c\": container with ID starting with 8c89c20de1b6c1aa5e210e0a36da94a9f8bda518322088c323c4b12e10362b1c not found: ID does not exist" containerID="8c89c20de1b6c1aa5e210e0a36da94a9f8bda518322088c323c4b12e10362b1c" Jan 29 17:50:24 crc kubenswrapper[4886]: I0129 17:50:24.466304 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c89c20de1b6c1aa5e210e0a36da94a9f8bda518322088c323c4b12e10362b1c"} err="failed to get container status \"8c89c20de1b6c1aa5e210e0a36da94a9f8bda518322088c323c4b12e10362b1c\": rpc error: code = NotFound desc = could not find container \"8c89c20de1b6c1aa5e210e0a36da94a9f8bda518322088c323c4b12e10362b1c\": container with ID starting with 8c89c20de1b6c1aa5e210e0a36da94a9f8bda518322088c323c4b12e10362b1c not found: ID does not exist" Jan 29 17:50:24 crc kubenswrapper[4886]: I0129 17:50:24.628248 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54c33a8c-623a-409c-8586-7b4c3c1c0510" path="/var/lib/kubelet/pods/54c33a8c-623a-409c-8586-7b4c3c1c0510/volumes" Jan 29 17:50:24 crc kubenswrapper[4886]: I0129 17:50:24.773689 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7bw7c" Jan 29 17:50:24 crc kubenswrapper[4886]: I0129 17:50:24.925976 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d2ph\" (UniqueName: \"kubernetes.io/projected/c566a66d-f66d-457d-80eb-a0cf5bf4e013-kube-api-access-9d2ph\") pod \"c566a66d-f66d-457d-80eb-a0cf5bf4e013\" (UID: \"c566a66d-f66d-457d-80eb-a0cf5bf4e013\") " Jan 29 17:50:24 crc kubenswrapper[4886]: I0129 17:50:24.926093 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c566a66d-f66d-457d-80eb-a0cf5bf4e013-catalog-content\") pod \"c566a66d-f66d-457d-80eb-a0cf5bf4e013\" (UID: \"c566a66d-f66d-457d-80eb-a0cf5bf4e013\") " Jan 29 17:50:24 crc kubenswrapper[4886]: I0129 17:50:24.926162 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c566a66d-f66d-457d-80eb-a0cf5bf4e013-utilities\") pod \"c566a66d-f66d-457d-80eb-a0cf5bf4e013\" (UID: \"c566a66d-f66d-457d-80eb-a0cf5bf4e013\") " Jan 29 17:50:24 crc kubenswrapper[4886]: I0129 17:50:24.928225 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c566a66d-f66d-457d-80eb-a0cf5bf4e013-utilities" (OuterVolumeSpecName: "utilities") pod "c566a66d-f66d-457d-80eb-a0cf5bf4e013" (UID: "c566a66d-f66d-457d-80eb-a0cf5bf4e013"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:50:24 crc kubenswrapper[4886]: I0129 17:50:24.941675 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c566a66d-f66d-457d-80eb-a0cf5bf4e013-kube-api-access-9d2ph" (OuterVolumeSpecName: "kube-api-access-9d2ph") pod "c566a66d-f66d-457d-80eb-a0cf5bf4e013" (UID: "c566a66d-f66d-457d-80eb-a0cf5bf4e013"). InnerVolumeSpecName "kube-api-access-9d2ph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:50:25 crc kubenswrapper[4886]: I0129 17:50:25.029077 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9d2ph\" (UniqueName: \"kubernetes.io/projected/c566a66d-f66d-457d-80eb-a0cf5bf4e013-kube-api-access-9d2ph\") on node \"crc\" DevicePath \"\"" Jan 29 17:50:25 crc kubenswrapper[4886]: I0129 17:50:25.029113 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c566a66d-f66d-457d-80eb-a0cf5bf4e013-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:50:25 crc kubenswrapper[4886]: I0129 17:50:25.103475 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c566a66d-f66d-457d-80eb-a0cf5bf4e013-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c566a66d-f66d-457d-80eb-a0cf5bf4e013" (UID: "c566a66d-f66d-457d-80eb-a0cf5bf4e013"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:50:25 crc kubenswrapper[4886]: I0129 17:50:25.132014 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c566a66d-f66d-457d-80eb-a0cf5bf4e013-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:50:25 crc kubenswrapper[4886]: I0129 17:50:25.201232 4886 generic.go:334] "Generic (PLEG): container finished" podID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" containerID="bc86b5548a2f6b98575b342d99a002bfb0143807c9dd174f5af50b3baca239ba" exitCode=0 Jan 29 17:50:25 crc kubenswrapper[4886]: I0129 17:50:25.201279 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bw7c" event={"ID":"c566a66d-f66d-457d-80eb-a0cf5bf4e013","Type":"ContainerDied","Data":"bc86b5548a2f6b98575b342d99a002bfb0143807c9dd174f5af50b3baca239ba"} Jan 29 17:50:25 crc kubenswrapper[4886]: I0129 17:50:25.201349 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bw7c" event={"ID":"c566a66d-f66d-457d-80eb-a0cf5bf4e013","Type":"ContainerDied","Data":"cab69af52cd3a4f3f325f6b78803a593e82fd270c10956a862ec4c1b3df6eb47"} Jan 29 17:50:25 crc kubenswrapper[4886]: I0129 17:50:25.201347 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7bw7c" Jan 29 17:50:25 crc kubenswrapper[4886]: I0129 17:50:25.201377 4886 scope.go:117] "RemoveContainer" containerID="bc86b5548a2f6b98575b342d99a002bfb0143807c9dd174f5af50b3baca239ba" Jan 29 17:50:25 crc kubenswrapper[4886]: I0129 17:50:25.233639 4886 scope.go:117] "RemoveContainer" containerID="048fbc3f19e9f2bb3a22233ff84755a02d78dda5d7adaf81250ada584b2655f0" Jan 29 17:50:25 crc kubenswrapper[4886]: I0129 17:50:25.250964 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7bw7c"] Jan 29 17:50:25 crc kubenswrapper[4886]: I0129 17:50:25.269243 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7bw7c"] Jan 29 17:50:25 crc kubenswrapper[4886]: I0129 17:50:25.281230 4886 scope.go:117] "RemoveContainer" containerID="31280720311a3cf46c0d281650fde637fb00d0bd369f8b6e628ebaffb4d39ace" Jan 29 17:50:25 crc kubenswrapper[4886]: I0129 17:50:25.304973 4886 scope.go:117] "RemoveContainer" containerID="bc86b5548a2f6b98575b342d99a002bfb0143807c9dd174f5af50b3baca239ba" Jan 29 17:50:25 crc kubenswrapper[4886]: E0129 17:50:25.305689 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc86b5548a2f6b98575b342d99a002bfb0143807c9dd174f5af50b3baca239ba\": container with ID starting with bc86b5548a2f6b98575b342d99a002bfb0143807c9dd174f5af50b3baca239ba not found: ID does not exist" containerID="bc86b5548a2f6b98575b342d99a002bfb0143807c9dd174f5af50b3baca239ba" Jan 29 17:50:25 crc kubenswrapper[4886]: I0129 17:50:25.305764 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc86b5548a2f6b98575b342d99a002bfb0143807c9dd174f5af50b3baca239ba"} err="failed to get container status \"bc86b5548a2f6b98575b342d99a002bfb0143807c9dd174f5af50b3baca239ba\": rpc error: code = NotFound desc = could not find container \"bc86b5548a2f6b98575b342d99a002bfb0143807c9dd174f5af50b3baca239ba\": container with ID starting with bc86b5548a2f6b98575b342d99a002bfb0143807c9dd174f5af50b3baca239ba not found: ID does not exist" Jan 29 17:50:25 crc kubenswrapper[4886]: I0129 17:50:25.305809 4886 scope.go:117] "RemoveContainer" containerID="048fbc3f19e9f2bb3a22233ff84755a02d78dda5d7adaf81250ada584b2655f0" Jan 29 17:50:25 crc kubenswrapper[4886]: E0129 17:50:25.306347 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"048fbc3f19e9f2bb3a22233ff84755a02d78dda5d7adaf81250ada584b2655f0\": container with ID starting with 048fbc3f19e9f2bb3a22233ff84755a02d78dda5d7adaf81250ada584b2655f0 not found: ID does not exist" containerID="048fbc3f19e9f2bb3a22233ff84755a02d78dda5d7adaf81250ada584b2655f0" Jan 29 17:50:25 crc kubenswrapper[4886]: I0129 17:50:25.306395 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"048fbc3f19e9f2bb3a22233ff84755a02d78dda5d7adaf81250ada584b2655f0"} err="failed to get container status \"048fbc3f19e9f2bb3a22233ff84755a02d78dda5d7adaf81250ada584b2655f0\": rpc error: code = NotFound desc = could not find container \"048fbc3f19e9f2bb3a22233ff84755a02d78dda5d7adaf81250ada584b2655f0\": container with ID starting with 048fbc3f19e9f2bb3a22233ff84755a02d78dda5d7adaf81250ada584b2655f0 not found: ID does not exist" Jan 29 17:50:25 crc kubenswrapper[4886]: I0129 17:50:25.306448 4886 scope.go:117] "RemoveContainer" 
containerID="31280720311a3cf46c0d281650fde637fb00d0bd369f8b6e628ebaffb4d39ace" Jan 29 17:50:25 crc kubenswrapper[4886]: E0129 17:50:25.306884 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31280720311a3cf46c0d281650fde637fb00d0bd369f8b6e628ebaffb4d39ace\": container with ID starting with 31280720311a3cf46c0d281650fde637fb00d0bd369f8b6e628ebaffb4d39ace not found: ID does not exist" containerID="31280720311a3cf46c0d281650fde637fb00d0bd369f8b6e628ebaffb4d39ace" Jan 29 17:50:25 crc kubenswrapper[4886]: I0129 17:50:25.307064 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31280720311a3cf46c0d281650fde637fb00d0bd369f8b6e628ebaffb4d39ace"} err="failed to get container status \"31280720311a3cf46c0d281650fde637fb00d0bd369f8b6e628ebaffb4d39ace\": rpc error: code = NotFound desc = could not find container \"31280720311a3cf46c0d281650fde637fb00d0bd369f8b6e628ebaffb4d39ace\": container with ID starting with 31280720311a3cf46c0d281650fde637fb00d0bd369f8b6e628ebaffb4d39ace not found: ID does not exist" Jan 29 17:50:26 crc kubenswrapper[4886]: I0129 17:50:26.633428 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" path="/var/lib/kubelet/pods/c566a66d-f66d-457d-80eb-a0cf5bf4e013/volumes" Jan 29 17:50:27 crc kubenswrapper[4886]: I0129 17:50:27.485718 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6tlwv"] Jan 29 17:50:27 crc kubenswrapper[4886]: E0129 17:50:27.487194 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" containerName="extract-utilities" Jan 29 17:50:27 crc kubenswrapper[4886]: I0129 17:50:27.487244 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" containerName="extract-utilities" Jan 29 17:50:27 crc kubenswrapper[4886]: E0129 17:50:27.487288 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54c33a8c-623a-409c-8586-7b4c3c1c0510" containerName="registry-server" Jan 29 17:50:27 crc kubenswrapper[4886]: I0129 17:50:27.487302 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="54c33a8c-623a-409c-8586-7b4c3c1c0510" containerName="registry-server" Jan 29 17:50:27 crc kubenswrapper[4886]: E0129 17:50:27.487364 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54c33a8c-623a-409c-8586-7b4c3c1c0510" containerName="extract-utilities" Jan 29 17:50:27 crc kubenswrapper[4886]: I0129 17:50:27.487378 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="54c33a8c-623a-409c-8586-7b4c3c1c0510" containerName="extract-utilities" Jan 29 17:50:27 crc kubenswrapper[4886]: E0129 17:50:27.487426 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54c33a8c-623a-409c-8586-7b4c3c1c0510" containerName="extract-content" Jan 29 17:50:27 crc kubenswrapper[4886]: I0129 17:50:27.487438 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="54c33a8c-623a-409c-8586-7b4c3c1c0510" containerName="extract-content" Jan 29 17:50:27 crc kubenswrapper[4886]: E0129 17:50:27.487453 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" containerName="registry-server" Jan 29 17:50:27 crc kubenswrapper[4886]: I0129 17:50:27.487464 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c566a66d-f66d-457d-80eb-a0cf5bf4e013" containerName="registry-server" Jan 29 
Jan 29 17:50:27 crc kubenswrapper[4886]: I0129 17:50:27.492052 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6tlwv"
Jan 29 17:50:27 crc kubenswrapper[4886]: I0129 17:50:27.500007 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6tlwv"]
Jan 29 17:50:27 crc kubenswrapper[4886]: I0129 17:50:27.632033 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h94ld\" (UniqueName: \"kubernetes.io/projected/23a03da1-7fa0-41f6-b906-4769ab664bc5-kube-api-access-h94ld\") pod \"redhat-operators-6tlwv\" (UID: \"23a03da1-7fa0-41f6-b906-4769ab664bc5\") " pod="openshift-marketplace/redhat-operators-6tlwv"
Jan 29 17:50:27 crc kubenswrapper[4886]: I0129 17:50:27.632382 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23a03da1-7fa0-41f6-b906-4769ab664bc5-catalog-content\") pod \"redhat-operators-6tlwv\" (UID: \"23a03da1-7fa0-41f6-b906-4769ab664bc5\") " pod="openshift-marketplace/redhat-operators-6tlwv"
Jan 29 17:50:27 crc kubenswrapper[4886]: I0129 17:50:27.632675 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23a03da1-7fa0-41f6-b906-4769ab664bc5-utilities\") pod \"redhat-operators-6tlwv\" (UID: \"23a03da1-7fa0-41f6-b906-4769ab664bc5\") " pod="openshift-marketplace/redhat-operators-6tlwv"
Jan 29 17:50:27 crc kubenswrapper[4886]: I0129 17:50:27.734933 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23a03da1-7fa0-41f6-b906-4769ab664bc5-utilities\") pod \"redhat-operators-6tlwv\" (UID: \"23a03da1-7fa0-41f6-b906-4769ab664bc5\") " pod="openshift-marketplace/redhat-operators-6tlwv"
Jan 29 17:50:27 crc kubenswrapper[4886]: I0129 17:50:27.735003 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h94ld\" (UniqueName: \"kubernetes.io/projected/23a03da1-7fa0-41f6-b906-4769ab664bc5-kube-api-access-h94ld\") pod \"redhat-operators-6tlwv\" (UID: \"23a03da1-7fa0-41f6-b906-4769ab664bc5\") " pod="openshift-marketplace/redhat-operators-6tlwv"
Jan 29 17:50:27 crc kubenswrapper[4886]: I0129 17:50:27.735094 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23a03da1-7fa0-41f6-b906-4769ab664bc5-catalog-content\") pod \"redhat-operators-6tlwv\" (UID: \"23a03da1-7fa0-41f6-b906-4769ab664bc5\") " pod="openshift-marketplace/redhat-operators-6tlwv"
Jan 29 17:50:27 crc kubenswrapper[4886]: I0129 17:50:27.735544 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23a03da1-7fa0-41f6-b906-4769ab664bc5-utilities\") pod \"redhat-operators-6tlwv\" (UID: \"23a03da1-7fa0-41f6-b906-4769ab664bc5\") " pod="openshift-marketplace/redhat-operators-6tlwv"
Jan 29 17:50:27 crc kubenswrapper[4886]: I0129 17:50:27.735696 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23a03da1-7fa0-41f6-b906-4769ab664bc5-catalog-content\") pod \"redhat-operators-6tlwv\" (UID: \"23a03da1-7fa0-41f6-b906-4769ab664bc5\") " pod="openshift-marketplace/redhat-operators-6tlwv"
Jan 29 17:50:27 crc kubenswrapper[4886]: I0129 17:50:27.754141 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h94ld\" (UniqueName: \"kubernetes.io/projected/23a03da1-7fa0-41f6-b906-4769ab664bc5-kube-api-access-h94ld\") pod \"redhat-operators-6tlwv\" (UID: \"23a03da1-7fa0-41f6-b906-4769ab664bc5\") " pod="openshift-marketplace/redhat-operators-6tlwv"
Jan 29 17:50:27 crc kubenswrapper[4886]: I0129 17:50:27.851656 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6tlwv"
Jan 29 17:50:28 crc kubenswrapper[4886]: I0129 17:50:28.334487 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6tlwv"]
Jan 29 17:50:29 crc kubenswrapper[4886]: I0129 17:50:29.260058 4886 generic.go:334] "Generic (PLEG): container finished" podID="23a03da1-7fa0-41f6-b906-4769ab664bc5" containerID="bcc6a4ee143fa849dab16d564a7897d3593761bb8a9147e60f4b959298b059fb" exitCode=0
Jan 29 17:50:29 crc kubenswrapper[4886]: I0129 17:50:29.260401 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6tlwv" event={"ID":"23a03da1-7fa0-41f6-b906-4769ab664bc5","Type":"ContainerDied","Data":"bcc6a4ee143fa849dab16d564a7897d3593761bb8a9147e60f4b959298b059fb"}
Jan 29 17:50:29 crc kubenswrapper[4886]: I0129 17:50:29.260440 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6tlwv" event={"ID":"23a03da1-7fa0-41f6-b906-4769ab664bc5","Type":"ContainerStarted","Data":"a202a48d002122e515252ad53e71cce754e31eedf1ab1c6214ecb88c2058cfde"}
Jan 29 17:50:30 crc kubenswrapper[4886]: I0129 17:50:30.617649 4886 scope.go:117] "RemoveContainer" containerID="8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3"
Jan 29 17:50:30 crc kubenswrapper[4886]: E0129 17:50:30.618271 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:50:31 crc kubenswrapper[4886]: I0129 17:50:31.281394 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6tlwv" event={"ID":"23a03da1-7fa0-41f6-b906-4769ab664bc5","Type":"ContainerStarted","Data":"f0631425ddab1323041e7ecf7489d9c47b65f44f8f52eae86f1730126b411aaf"}
Jan 29 17:50:32 crc kubenswrapper[4886]: E0129 17:50:32.620743 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621"
Jan 29 17:50:37 crc kubenswrapper[4886]: I0129 17:50:37.367237 4886 generic.go:334] "Generic (PLEG): container finished" podID="23a03da1-7fa0-41f6-b906-4769ab664bc5" containerID="f0631425ddab1323041e7ecf7489d9c47b65f44f8f52eae86f1730126b411aaf" exitCode=0
Jan 29 17:50:37 crc kubenswrapper[4886]: I0129 17:50:37.367352 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6tlwv" event={"ID":"23a03da1-7fa0-41f6-b906-4769ab664bc5","Type":"ContainerDied","Data":"f0631425ddab1323041e7ecf7489d9c47b65f44f8f52eae86f1730126b411aaf"}
Jan 29 17:50:38 crc kubenswrapper[4886]: I0129 17:50:38.382718 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6tlwv" event={"ID":"23a03da1-7fa0-41f6-b906-4769ab664bc5","Type":"ContainerStarted","Data":"8df92aae9f8620b2d061beecbaf2f5bce72758e4828378f15af511a411ef6e6c"}
Jan 29 17:50:38 crc kubenswrapper[4886]: I0129 17:50:38.415855 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6tlwv" podStartSLOduration=2.634675891 podStartE2EDuration="11.415828356s" podCreationTimestamp="2026-01-29 17:50:27 +0000 UTC" firstStartedPulling="2026-01-29 17:50:29.263228123 +0000 UTC m=+5312.171947435" lastFinishedPulling="2026-01-29 17:50:38.044380608 +0000 UTC m=+5320.953099900" observedRunningTime="2026-01-29 17:50:38.410567557 +0000 UTC m=+5321.319286859" watchObservedRunningTime="2026-01-29 17:50:38.415828356 +0000 UTC m=+5321.324547668"
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6tlwv" Jan 29 17:50:49 crc kubenswrapper[4886]: I0129 17:50:49.083364 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6tlwv"] Jan 29 17:50:50 crc kubenswrapper[4886]: I0129 17:50:50.539732 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6tlwv" podUID="23a03da1-7fa0-41f6-b906-4769ab664bc5" containerName="registry-server" containerID="cri-o://8df92aae9f8620b2d061beecbaf2f5bce72758e4828378f15af511a411ef6e6c" gracePeriod=2 Jan 29 17:50:51 crc kubenswrapper[4886]: I0129 17:50:51.154106 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6tlwv" Jan 29 17:50:51 crc kubenswrapper[4886]: I0129 17:50:51.264129 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23a03da1-7fa0-41f6-b906-4769ab664bc5-catalog-content\") pod \"23a03da1-7fa0-41f6-b906-4769ab664bc5\" (UID: \"23a03da1-7fa0-41f6-b906-4769ab664bc5\") " Jan 29 17:50:51 crc kubenswrapper[4886]: I0129 17:50:51.264640 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h94ld\" (UniqueName: \"kubernetes.io/projected/23a03da1-7fa0-41f6-b906-4769ab664bc5-kube-api-access-h94ld\") pod \"23a03da1-7fa0-41f6-b906-4769ab664bc5\" (UID: \"23a03da1-7fa0-41f6-b906-4769ab664bc5\") " Jan 29 17:50:51 crc kubenswrapper[4886]: I0129 17:50:51.266527 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23a03da1-7fa0-41f6-b906-4769ab664bc5-utilities\") pod \"23a03da1-7fa0-41f6-b906-4769ab664bc5\" (UID: \"23a03da1-7fa0-41f6-b906-4769ab664bc5\") " Jan 29 17:50:51 crc kubenswrapper[4886]: I0129 17:50:51.268677 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23a03da1-7fa0-41f6-b906-4769ab664bc5-utilities" (OuterVolumeSpecName: "utilities") pod "23a03da1-7fa0-41f6-b906-4769ab664bc5" (UID: "23a03da1-7fa0-41f6-b906-4769ab664bc5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:50:51 crc kubenswrapper[4886]: I0129 17:50:51.271929 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23a03da1-7fa0-41f6-b906-4769ab664bc5-kube-api-access-h94ld" (OuterVolumeSpecName: "kube-api-access-h94ld") pod "23a03da1-7fa0-41f6-b906-4769ab664bc5" (UID: "23a03da1-7fa0-41f6-b906-4769ab664bc5"). InnerVolumeSpecName "kube-api-access-h94ld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:50:51 crc kubenswrapper[4886]: I0129 17:50:51.371225 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h94ld\" (UniqueName: \"kubernetes.io/projected/23a03da1-7fa0-41f6-b906-4769ab664bc5-kube-api-access-h94ld\") on node \"crc\" DevicePath \"\"" Jan 29 17:50:51 crc kubenswrapper[4886]: I0129 17:50:51.371256 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23a03da1-7fa0-41f6-b906-4769ab664bc5-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:50:51 crc kubenswrapper[4886]: I0129 17:50:51.397224 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23a03da1-7fa0-41f6-b906-4769ab664bc5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23a03da1-7fa0-41f6-b906-4769ab664bc5" (UID: "23a03da1-7fa0-41f6-b906-4769ab664bc5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:50:51 crc kubenswrapper[4886]: I0129 17:50:51.476888 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23a03da1-7fa0-41f6-b906-4769ab664bc5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:50:51 crc kubenswrapper[4886]: I0129 17:50:51.550924 4886 generic.go:334] "Generic (PLEG): container finished" podID="23a03da1-7fa0-41f6-b906-4769ab664bc5" containerID="8df92aae9f8620b2d061beecbaf2f5bce72758e4828378f15af511a411ef6e6c" exitCode=0 Jan 29 17:50:51 crc kubenswrapper[4886]: I0129 17:50:51.550965 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6tlwv" event={"ID":"23a03da1-7fa0-41f6-b906-4769ab664bc5","Type":"ContainerDied","Data":"8df92aae9f8620b2d061beecbaf2f5bce72758e4828378f15af511a411ef6e6c"} Jan 29 17:50:51 crc kubenswrapper[4886]: I0129 17:50:51.550994 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6tlwv" event={"ID":"23a03da1-7fa0-41f6-b906-4769ab664bc5","Type":"ContainerDied","Data":"a202a48d002122e515252ad53e71cce754e31eedf1ab1c6214ecb88c2058cfde"} Jan 29 17:50:51 crc kubenswrapper[4886]: I0129 17:50:51.551009 4886 scope.go:117] "RemoveContainer" containerID="8df92aae9f8620b2d061beecbaf2f5bce72758e4828378f15af511a411ef6e6c" Jan 29 17:50:51 crc kubenswrapper[4886]: I0129 17:50:51.551032 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6tlwv" Jan 29 17:50:51 crc kubenswrapper[4886]: I0129 17:50:51.593991 4886 scope.go:117] "RemoveContainer" containerID="f0631425ddab1323041e7ecf7489d9c47b65f44f8f52eae86f1730126b411aaf" Jan 29 17:50:51 crc kubenswrapper[4886]: I0129 17:50:51.616156 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6tlwv"] Jan 29 17:50:51 crc kubenswrapper[4886]: I0129 17:50:51.620006 4886 scope.go:117] "RemoveContainer" containerID="bcc6a4ee143fa849dab16d564a7897d3593761bb8a9147e60f4b959298b059fb" Jan 29 17:50:51 crc kubenswrapper[4886]: I0129 17:50:51.627924 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6tlwv"] Jan 29 17:50:51 crc kubenswrapper[4886]: I0129 17:50:51.692770 4886 scope.go:117] "RemoveContainer" containerID="8df92aae9f8620b2d061beecbaf2f5bce72758e4828378f15af511a411ef6e6c" Jan 29 17:50:51 crc kubenswrapper[4886]: E0129 17:50:51.693427 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8df92aae9f8620b2d061beecbaf2f5bce72758e4828378f15af511a411ef6e6c\": container with ID starting with 8df92aae9f8620b2d061beecbaf2f5bce72758e4828378f15af511a411ef6e6c not found: ID does not exist" containerID="8df92aae9f8620b2d061beecbaf2f5bce72758e4828378f15af511a411ef6e6c" Jan 29 17:50:51 crc kubenswrapper[4886]: I0129 17:50:51.693497 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8df92aae9f8620b2d061beecbaf2f5bce72758e4828378f15af511a411ef6e6c"} err="failed to get container status \"8df92aae9f8620b2d061beecbaf2f5bce72758e4828378f15af511a411ef6e6c\": rpc error: code = NotFound desc = could not find container \"8df92aae9f8620b2d061beecbaf2f5bce72758e4828378f15af511a411ef6e6c\": container with ID starting with 8df92aae9f8620b2d061beecbaf2f5bce72758e4828378f15af511a411ef6e6c not found: ID does not exist" Jan 29 17:50:51 crc kubenswrapper[4886]: I0129 17:50:51.693546 4886 scope.go:117] "RemoveContainer" containerID="f0631425ddab1323041e7ecf7489d9c47b65f44f8f52eae86f1730126b411aaf" Jan 29 17:50:51 crc kubenswrapper[4886]: E0129 17:50:51.694091 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0631425ddab1323041e7ecf7489d9c47b65f44f8f52eae86f1730126b411aaf\": container with ID starting with f0631425ddab1323041e7ecf7489d9c47b65f44f8f52eae86f1730126b411aaf not found: ID does not exist" containerID="f0631425ddab1323041e7ecf7489d9c47b65f44f8f52eae86f1730126b411aaf" Jan 29 17:50:51 crc kubenswrapper[4886]: I0129 17:50:51.694122 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0631425ddab1323041e7ecf7489d9c47b65f44f8f52eae86f1730126b411aaf"} err="failed to get container status \"f0631425ddab1323041e7ecf7489d9c47b65f44f8f52eae86f1730126b411aaf\": rpc error: code = NotFound desc = could not find container \"f0631425ddab1323041e7ecf7489d9c47b65f44f8f52eae86f1730126b411aaf\": container with ID starting with f0631425ddab1323041e7ecf7489d9c47b65f44f8f52eae86f1730126b411aaf not found: ID does not exist" Jan 29 17:50:51 crc kubenswrapper[4886]: I0129 17:50:51.694141 4886 scope.go:117] "RemoveContainer" containerID="bcc6a4ee143fa849dab16d564a7897d3593761bb8a9147e60f4b959298b059fb" Jan 29 17:50:51 crc kubenswrapper[4886]: E0129 17:50:51.694672 4886 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"bcc6a4ee143fa849dab16d564a7897d3593761bb8a9147e60f4b959298b059fb\": container with ID starting with bcc6a4ee143fa849dab16d564a7897d3593761bb8a9147e60f4b959298b059fb not found: ID does not exist" containerID="bcc6a4ee143fa849dab16d564a7897d3593761bb8a9147e60f4b959298b059fb" Jan 29 17:50:51 crc kubenswrapper[4886]: I0129 17:50:51.694737 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcc6a4ee143fa849dab16d564a7897d3593761bb8a9147e60f4b959298b059fb"} err="failed to get container status \"bcc6a4ee143fa849dab16d564a7897d3593761bb8a9147e60f4b959298b059fb\": rpc error: code = NotFound desc = could not find container \"bcc6a4ee143fa849dab16d564a7897d3593761bb8a9147e60f4b959298b059fb\": container with ID starting with bcc6a4ee143fa849dab16d564a7897d3593761bb8a9147e60f4b959298b059fb not found: ID does not exist" Jan 29 17:50:52 crc kubenswrapper[4886]: I0129 17:50:52.630342 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23a03da1-7fa0-41f6-b906-4769ab664bc5" path="/var/lib/kubelet/pods/23a03da1-7fa0-41f6-b906-4769ab664bc5/volumes" Jan 29 17:50:53 crc kubenswrapper[4886]: I0129 17:50:53.615615 4886 scope.go:117] "RemoveContainer" containerID="8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3" Jan 29 17:50:53 crc kubenswrapper[4886]: E0129 17:50:53.616566 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:50:58 crc kubenswrapper[4886]: E0129 17:50:58.636869 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:51:07 crc kubenswrapper[4886]: I0129 17:51:07.616119 4886 scope.go:117] "RemoveContainer" containerID="8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3" Jan 29 17:51:07 crc kubenswrapper[4886]: E0129 17:51:07.617173 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 17:51:13 crc kubenswrapper[4886]: E0129 17:51:13.761822 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 17:51:13 crc kubenswrapper[4886]: E0129 17:51:13.762460 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
Jan 29 17:51:19 crc kubenswrapper[4886]: I0129 17:51:19.615416 4886 scope.go:117] "RemoveContainer" containerID="8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3"
Jan 29 17:51:19 crc kubenswrapper[4886]: E0129 17:51:19.616302 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:51:27 crc kubenswrapper[4886]: E0129 17:51:27.618964 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621"
Jan 29 17:51:32 crc kubenswrapper[4886]: I0129 17:51:32.615515 4886 scope.go:117] "RemoveContainer" containerID="8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3"
Jan 29 17:51:32 crc kubenswrapper[4886]: E0129 17:51:32.616540 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:51:38 crc kubenswrapper[4886]: E0129 17:51:38.631432 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621"
Jan 29 17:51:44 crc kubenswrapper[4886]: I0129 17:51:44.616507 4886 scope.go:117] "RemoveContainer" containerID="8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3"
Jan 29 17:51:44 crc kubenswrapper[4886]: E0129 17:51:44.617874 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:51:51 crc kubenswrapper[4886]: E0129 17:51:51.617816 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621"
Jan 29 17:51:57 crc kubenswrapper[4886]: I0129 17:51:57.617037 4886 scope.go:117] "RemoveContainer" containerID="8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3"
Jan 29 17:51:57 crc kubenswrapper[4886]: E0129 17:51:57.618500 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:52:03 crc kubenswrapper[4886]: E0129 17:52:03.620598 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621"
Jan 29 17:52:10 crc kubenswrapper[4886]: I0129 17:52:10.626064 4886 scope.go:117] "RemoveContainer" containerID="8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3"
Jan 29 17:52:10 crc kubenswrapper[4886]: E0129 17:52:10.629793 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:52:17 crc kubenswrapper[4886]: E0129 17:52:17.619465 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621"
Jan 29 17:52:23 crc kubenswrapper[4886]: I0129 17:52:23.620434 4886 scope.go:117] "RemoveContainer" containerID="8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3"
Jan 29 17:52:23 crc kubenswrapper[4886]: E0129 17:52:23.621732 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:52:28 crc kubenswrapper[4886]: E0129 17:52:28.632050 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621"
Jan 29 17:52:38 crc kubenswrapper[4886]: I0129 17:52:38.627917 4886 scope.go:117] "RemoveContainer" containerID="8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3"
Jan 29 17:52:38 crc kubenswrapper[4886]: E0129 17:52:38.629207 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:52:41 crc kubenswrapper[4886]: E0129 17:52:41.618806 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621"
Jan 29 17:52:51 crc kubenswrapper[4886]: I0129 17:52:51.616259 4886 scope.go:117] "RemoveContainer" containerID="8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3"
Jan 29 17:52:51 crc kubenswrapper[4886]: E0129 17:52:51.617442 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:52:54 crc kubenswrapper[4886]: E0129 17:52:54.620919 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621"
Jan 29 17:53:02 crc kubenswrapper[4886]: I0129 17:53:02.615643 4886 scope.go:117] "RemoveContainer" containerID="8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3"
Jan 29 17:53:02 crc kubenswrapper[4886]: E0129 17:53:02.616645 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:53:09 crc kubenswrapper[4886]: E0129 17:53:09.619564 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621"
Jan 29 17:53:16 crc kubenswrapper[4886]: I0129 17:53:16.615824 4886 scope.go:117] "RemoveContainer" containerID="8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3"
Jan 29 17:53:16 crc kubenswrapper[4886]: E0129 17:53:16.617608 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:53:23 crc kubenswrapper[4886]: E0129 17:53:23.620075 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621"
Jan 29 17:53:29 crc kubenswrapper[4886]: I0129 17:53:29.615349 4886 scope.go:117] "RemoveContainer" containerID="8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3"
Jan 29 17:53:29 crc kubenswrapper[4886]: E0129 17:53:29.615970 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
Jan 29 17:53:36 crc kubenswrapper[4886]: E0129 17:53:36.619857 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621"
Jan 29 17:53:42 crc kubenswrapper[4886]: I0129 17:53:42.614947 4886 scope.go:117] "RemoveContainer" containerID="8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3"
Jan 29 17:53:43 crc kubenswrapper[4886]: I0129 17:53:43.814960 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerStarted","Data":"fa1f6ca4f64abfca286935b5cea47f9bd94b19d5dd8d9a7d6d366866d5a4fa94"}
containerID="8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3" Jan 29 17:53:43 crc kubenswrapper[4886]: I0129 17:53:43.814960 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerStarted","Data":"fa1f6ca4f64abfca286935b5cea47f9bd94b19d5dd8d9a7d6d366866d5a4fa94"} Jan 29 17:53:51 crc kubenswrapper[4886]: E0129 17:53:51.618850 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:54:02 crc kubenswrapper[4886]: E0129 17:54:02.619956 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:54:16 crc kubenswrapper[4886]: E0129 17:54:16.619631 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:54:29 crc kubenswrapper[4886]: I0129 17:54:29.193867 4886 trace.go:236] Trace[2044250100]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-1" (29-Jan-2026 17:54:28.184) (total time: 1009ms): Jan 29 17:54:29 crc kubenswrapper[4886]: Trace[2044250100]: [1.009713838s] [1.009713838s] END Jan 29 17:54:29 crc kubenswrapper[4886]: E0129 17:54:29.618662 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:54:40 crc kubenswrapper[4886]: E0129 17:54:40.622130 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:54:52 crc kubenswrapper[4886]: E0129 17:54:52.627982 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:55:03 crc kubenswrapper[4886]: E0129 17:55:03.619717 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:55:17 crc kubenswrapper[4886]: E0129 17:55:17.618022 
4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:55:31 crc kubenswrapper[4886]: E0129 17:55:31.618667 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:55:45 crc kubenswrapper[4886]: E0129 17:55:45.621199 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:55:58 crc kubenswrapper[4886]: E0129 17:55:58.633195 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:55:59 crc kubenswrapper[4886]: I0129 17:55:59.661434 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:55:59 crc kubenswrapper[4886]: I0129 17:55:59.661526 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:56:11 crc kubenswrapper[4886]: E0129 17:56:11.618266 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:56:24 crc kubenswrapper[4886]: I0129 17:56:24.620884 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 17:56:24 crc kubenswrapper[4886]: E0129 17:56:24.761270 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 17:56:24 crc kubenswrapper[4886]: E0129 17:56:24.761459 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nlxp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qsjfd_openshift-marketplace(7ceed770-f253-4044-92f0-c8a07b89b621): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:56:24 crc kubenswrapper[4886]: E0129 17:56:24.762642 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:56:29 crc kubenswrapper[4886]: I0129 17:56:29.661117 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:56:29 crc kubenswrapper[4886]: I0129 17:56:29.661716 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:56:38 crc kubenswrapper[4886]: E0129 17:56:38.634555 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:56:49 crc kubenswrapper[4886]: E0129 17:56:49.617322 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:56:59 crc kubenswrapper[4886]: I0129 17:56:59.661389 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:56:59 crc kubenswrapper[4886]: I0129 17:56:59.661941 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:56:59 crc kubenswrapper[4886]: I0129 17:56:59.661991 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" Jan 29 17:56:59 crc kubenswrapper[4886]: I0129 17:56:59.662917 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fa1f6ca4f64abfca286935b5cea47f9bd94b19d5dd8d9a7d6d366866d5a4fa94"} pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 17:56:59 crc kubenswrapper[4886]: I0129 17:56:59.662979 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" containerID="cri-o://fa1f6ca4f64abfca286935b5cea47f9bd94b19d5dd8d9a7d6d366866d5a4fa94" gracePeriod=600 Jan 29 17:57:00 crc kubenswrapper[4886]: I0129 17:57:00.385914 4886 generic.go:334] "Generic (PLEG): container finished" podID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerID="fa1f6ca4f64abfca286935b5cea47f9bd94b19d5dd8d9a7d6d366866d5a4fa94" exitCode=0 Jan 29 17:57:00 crc kubenswrapper[4886]: I0129 17:57:00.386008 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerDied","Data":"fa1f6ca4f64abfca286935b5cea47f9bd94b19d5dd8d9a7d6d366866d5a4fa94"} Jan 29 17:57:00 crc kubenswrapper[4886]: I0129 17:57:00.386307 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerStarted","Data":"d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a"} Jan 29 17:57:00 crc kubenswrapper[4886]: I0129 17:57:00.386367 4886 scope.go:117] "RemoveContainer" containerID="8f37486cd564f3c9ff31aeb674510c8a56e76898f95a0396c83ca3b24bffcac3" Jan 29 17:57:01 crc kubenswrapper[4886]: E0129 17:57:01.617357 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:57:14 crc kubenswrapper[4886]: E0129 17:57:14.620668 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:57:27 crc kubenswrapper[4886]: E0129 17:57:27.618681 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:57:41 crc kubenswrapper[4886]: E0129 17:57:41.618087 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:57:55 crc kubenswrapper[4886]: E0129 17:57:55.617752 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:58:10 crc kubenswrapper[4886]: E0129 17:58:10.617110 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:58:24 crc kubenswrapper[4886]: E0129 17:58:24.617720 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:58:38 crc kubenswrapper[4886]: E0129 17:58:38.638611 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:58:52 crc kubenswrapper[4886]: E0129 17:58:52.618895 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:59:05 crc kubenswrapper[4886]: E0129 17:59:05.617535 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:59:17 crc kubenswrapper[4886]: E0129 17:59:17.619974 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:59:28 crc kubenswrapper[4886]: E0129 17:59:28.635258 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:59:29 crc kubenswrapper[4886]: I0129 17:59:29.661131 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:59:29 crc kubenswrapper[4886]: I0129 17:59:29.661592 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:59:40 crc kubenswrapper[4886]: E0129 17:59:40.623968 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:59:51 crc kubenswrapper[4886]: E0129 17:59:51.618289 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 17:59:59 crc kubenswrapper[4886]: I0129 17:59:59.660631 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:59:59 crc kubenswrapper[4886]: I0129 17:59:59.661380 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 18:00:00 crc kubenswrapper[4886]: I0129 18:00:00.180701 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495160-89jtk"] Jan 29 18:00:00 crc kubenswrapper[4886]: E0129 18:00:00.181785 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23a03da1-7fa0-41f6-b906-4769ab664bc5" containerName="extract-content" Jan 29 18:00:00 crc kubenswrapper[4886]: I0129 18:00:00.181879 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="23a03da1-7fa0-41f6-b906-4769ab664bc5" containerName="extract-content" Jan 29 18:00:00 crc kubenswrapper[4886]: E0129 
18:00:00.181967 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23a03da1-7fa0-41f6-b906-4769ab664bc5" containerName="registry-server" Jan 29 18:00:00 crc kubenswrapper[4886]: I0129 18:00:00.182035 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="23a03da1-7fa0-41f6-b906-4769ab664bc5" containerName="registry-server" Jan 29 18:00:00 crc kubenswrapper[4886]: E0129 18:00:00.182124 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23a03da1-7fa0-41f6-b906-4769ab664bc5" containerName="extract-utilities" Jan 29 18:00:00 crc kubenswrapper[4886]: I0129 18:00:00.182197 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="23a03da1-7fa0-41f6-b906-4769ab664bc5" containerName="extract-utilities" Jan 29 18:00:00 crc kubenswrapper[4886]: I0129 18:00:00.182626 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="23a03da1-7fa0-41f6-b906-4769ab664bc5" containerName="registry-server" Jan 29 18:00:00 crc kubenswrapper[4886]: I0129 18:00:00.183735 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495160-89jtk" Jan 29 18:00:00 crc kubenswrapper[4886]: I0129 18:00:00.193142 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495160-89jtk"] Jan 29 18:00:00 crc kubenswrapper[4886]: I0129 18:00:00.216756 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 18:00:00 crc kubenswrapper[4886]: I0129 18:00:00.216765 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 18:00:00 crc kubenswrapper[4886]: I0129 18:00:00.221186 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d5d488e-61ed-4dc1-b209-0d4c90eac204-config-volume\") pod \"collect-profiles-29495160-89jtk\" (UID: \"7d5d488e-61ed-4dc1-b209-0d4c90eac204\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495160-89jtk" Jan 29 18:00:00 crc kubenswrapper[4886]: I0129 18:00:00.221392 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d5d488e-61ed-4dc1-b209-0d4c90eac204-secret-volume\") pod \"collect-profiles-29495160-89jtk\" (UID: \"7d5d488e-61ed-4dc1-b209-0d4c90eac204\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495160-89jtk" Jan 29 18:00:00 crc kubenswrapper[4886]: I0129 18:00:00.221467 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv8rn\" (UniqueName: \"kubernetes.io/projected/7d5d488e-61ed-4dc1-b209-0d4c90eac204-kube-api-access-dv8rn\") pod \"collect-profiles-29495160-89jtk\" (UID: \"7d5d488e-61ed-4dc1-b209-0d4c90eac204\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495160-89jtk" Jan 29 18:00:00 crc kubenswrapper[4886]: I0129 18:00:00.323719 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d5d488e-61ed-4dc1-b209-0d4c90eac204-secret-volume\") pod \"collect-profiles-29495160-89jtk\" (UID: \"7d5d488e-61ed-4dc1-b209-0d4c90eac204\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495160-89jtk" Jan 29 18:00:00 crc kubenswrapper[4886]: 
I0129 18:00:00.323813 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv8rn\" (UniqueName: \"kubernetes.io/projected/7d5d488e-61ed-4dc1-b209-0d4c90eac204-kube-api-access-dv8rn\") pod \"collect-profiles-29495160-89jtk\" (UID: \"7d5d488e-61ed-4dc1-b209-0d4c90eac204\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495160-89jtk" Jan 29 18:00:00 crc kubenswrapper[4886]: I0129 18:00:00.323932 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d5d488e-61ed-4dc1-b209-0d4c90eac204-config-volume\") pod \"collect-profiles-29495160-89jtk\" (UID: \"7d5d488e-61ed-4dc1-b209-0d4c90eac204\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495160-89jtk" Jan 29 18:00:00 crc kubenswrapper[4886]: I0129 18:00:00.325450 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d5d488e-61ed-4dc1-b209-0d4c90eac204-config-volume\") pod \"collect-profiles-29495160-89jtk\" (UID: \"7d5d488e-61ed-4dc1-b209-0d4c90eac204\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495160-89jtk" Jan 29 18:00:00 crc kubenswrapper[4886]: I0129 18:00:00.344202 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d5d488e-61ed-4dc1-b209-0d4c90eac204-secret-volume\") pod \"collect-profiles-29495160-89jtk\" (UID: \"7d5d488e-61ed-4dc1-b209-0d4c90eac204\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495160-89jtk" Jan 29 18:00:00 crc kubenswrapper[4886]: I0129 18:00:00.344753 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv8rn\" (UniqueName: \"kubernetes.io/projected/7d5d488e-61ed-4dc1-b209-0d4c90eac204-kube-api-access-dv8rn\") pod \"collect-profiles-29495160-89jtk\" (UID: \"7d5d488e-61ed-4dc1-b209-0d4c90eac204\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495160-89jtk" Jan 29 18:00:00 crc kubenswrapper[4886]: I0129 18:00:00.537788 4886 util.go:30] "No sandbox for pod can be found. 
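
The kube-api-access-* volume mounted above is the projected service-account token volume; inside the pod it surfaces at the canonical path already visible in the init-container spec dumped earlier in this log (/var/run/secrets/kubernetes.io/serviceaccount). A sketch that reads it from inside a running pod (stdlib; the file names are the Kubernetes defaults):

    package main

    import (
        "fmt"
        "os"
    )

    // Read the projected service-account volume from inside a pod. The
    // mount path matches the VolumeMount in the container spec above;
    // token is the short-lived projected SA token, with namespace and
    // ca.crt riding along in the same projected volume.
    func main() {
        const dir = "/var/run/secrets/kubernetes.io/serviceaccount"
        for _, name := range []string{"token", "namespace", "ca.crt"} {
            b, err := os.ReadFile(dir + "/" + name)
            if err != nil {
                fmt.Println("read error:", err)
                continue
            }
            fmt.Printf("%s: %d bytes\n", name, len(b))
        }
    }
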
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495160-89jtk" Jan 29 18:00:01 crc kubenswrapper[4886]: I0129 18:00:01.090259 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495160-89jtk"] Jan 29 18:00:01 crc kubenswrapper[4886]: W0129 18:00:01.098820 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d5d488e_61ed_4dc1_b209_0d4c90eac204.slice/crio-a95a2ce75d3c522077c1b1011fca397ce7bf0005a75c52b7823917a27fd7f83d WatchSource:0}: Error finding container a95a2ce75d3c522077c1b1011fca397ce7bf0005a75c52b7823917a27fd7f83d: Status 404 returned error can't find the container with id a95a2ce75d3c522077c1b1011fca397ce7bf0005a75c52b7823917a27fd7f83d Jan 29 18:00:01 crc kubenswrapper[4886]: I0129 18:00:01.607849 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495160-89jtk" event={"ID":"7d5d488e-61ed-4dc1-b209-0d4c90eac204","Type":"ContainerStarted","Data":"a3a5dd3d496c3ee55fe9582f3f1eae23dc0082c3dc8857b9c954a351ccc720bb"} Jan 29 18:00:01 crc kubenswrapper[4886]: I0129 18:00:01.608204 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495160-89jtk" event={"ID":"7d5d488e-61ed-4dc1-b209-0d4c90eac204","Type":"ContainerStarted","Data":"a95a2ce75d3c522077c1b1011fca397ce7bf0005a75c52b7823917a27fd7f83d"} Jan 29 18:00:02 crc kubenswrapper[4886]: E0129 18:00:02.616636 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 18:00:02 crc kubenswrapper[4886]: I0129 18:00:02.664549 4886 generic.go:334] "Generic (PLEG): container finished" podID="7d5d488e-61ed-4dc1-b209-0d4c90eac204" containerID="a3a5dd3d496c3ee55fe9582f3f1eae23dc0082c3dc8857b9c954a351ccc720bb" exitCode=0 Jan 29 18:00:02 crc kubenswrapper[4886]: I0129 18:00:02.664625 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495160-89jtk" event={"ID":"7d5d488e-61ed-4dc1-b209-0d4c90eac204","Type":"ContainerDied","Data":"a3a5dd3d496c3ee55fe9582f3f1eae23dc0082c3dc8857b9c954a351ccc720bb"} Jan 29 18:00:04 crc kubenswrapper[4886]: I0129 18:00:04.133653 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495160-89jtk" Jan 29 18:00:04 crc kubenswrapper[4886]: I0129 18:00:04.225108 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d5d488e-61ed-4dc1-b209-0d4c90eac204-secret-volume\") pod \"7d5d488e-61ed-4dc1-b209-0d4c90eac204\" (UID: \"7d5d488e-61ed-4dc1-b209-0d4c90eac204\") " Jan 29 18:00:04 crc kubenswrapper[4886]: I0129 18:00:04.225284 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv8rn\" (UniqueName: \"kubernetes.io/projected/7d5d488e-61ed-4dc1-b209-0d4c90eac204-kube-api-access-dv8rn\") pod \"7d5d488e-61ed-4dc1-b209-0d4c90eac204\" (UID: \"7d5d488e-61ed-4dc1-b209-0d4c90eac204\") " Jan 29 18:00:04 crc kubenswrapper[4886]: I0129 18:00:04.225574 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d5d488e-61ed-4dc1-b209-0d4c90eac204-config-volume\") pod \"7d5d488e-61ed-4dc1-b209-0d4c90eac204\" (UID: \"7d5d488e-61ed-4dc1-b209-0d4c90eac204\") " Jan 29 18:00:04 crc kubenswrapper[4886]: I0129 18:00:04.226817 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d5d488e-61ed-4dc1-b209-0d4c90eac204-config-volume" (OuterVolumeSpecName: "config-volume") pod "7d5d488e-61ed-4dc1-b209-0d4c90eac204" (UID: "7d5d488e-61ed-4dc1-b209-0d4c90eac204"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 18:00:04 crc kubenswrapper[4886]: I0129 18:00:04.253662 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d5d488e-61ed-4dc1-b209-0d4c90eac204-kube-api-access-dv8rn" (OuterVolumeSpecName: "kube-api-access-dv8rn") pod "7d5d488e-61ed-4dc1-b209-0d4c90eac204" (UID: "7d5d488e-61ed-4dc1-b209-0d4c90eac204"). InnerVolumeSpecName "kube-api-access-dv8rn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 18:00:04 crc kubenswrapper[4886]: I0129 18:00:04.287056 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d5d488e-61ed-4dc1-b209-0d4c90eac204-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7d5d488e-61ed-4dc1-b209-0d4c90eac204" (UID: "7d5d488e-61ed-4dc1-b209-0d4c90eac204"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 18:00:04 crc kubenswrapper[4886]: I0129 18:00:04.329202 4886 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d5d488e-61ed-4dc1-b209-0d4c90eac204-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 18:00:04 crc kubenswrapper[4886]: I0129 18:00:04.329229 4886 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d5d488e-61ed-4dc1-b209-0d4c90eac204-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 18:00:04 crc kubenswrapper[4886]: I0129 18:00:04.329239 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dv8rn\" (UniqueName: \"kubernetes.io/projected/7d5d488e-61ed-4dc1-b209-0d4c90eac204-kube-api-access-dv8rn\") on node \"crc\" DevicePath \"\"" Jan 29 18:00:04 crc kubenswrapper[4886]: I0129 18:00:04.692306 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495160-89jtk" event={"ID":"7d5d488e-61ed-4dc1-b209-0d4c90eac204","Type":"ContainerDied","Data":"a95a2ce75d3c522077c1b1011fca397ce7bf0005a75c52b7823917a27fd7f83d"} Jan 29 18:00:04 crc kubenswrapper[4886]: I0129 18:00:04.692387 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a95a2ce75d3c522077c1b1011fca397ce7bf0005a75c52b7823917a27fd7f83d" Jan 29 18:00:04 crc kubenswrapper[4886]: I0129 18:00:04.692471 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495160-89jtk" Jan 29 18:00:04 crc kubenswrapper[4886]: I0129 18:00:04.727309 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495115-pkxcz"] Jan 29 18:00:04 crc kubenswrapper[4886]: I0129 18:00:04.739367 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495115-pkxcz"] Jan 29 18:00:06 crc kubenswrapper[4886]: I0129 18:00:06.634621 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="875b9b50-c440-4567-b475-c890d3d5d713" path="/var/lib/kubelet/pods/875b9b50-c440-4567-b475-c890d3d5d713/volumes" Jan 29 18:00:17 crc kubenswrapper[4886]: E0129 18:00:17.618868 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 18:00:22 crc kubenswrapper[4886]: I0129 18:00:22.021681 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-58r66"] Jan 29 18:00:22 crc kubenswrapper[4886]: E0129 18:00:22.022853 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5d488e-61ed-4dc1-b209-0d4c90eac204" containerName="collect-profiles" Jan 29 18:00:22 crc kubenswrapper[4886]: I0129 18:00:22.022869 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5d488e-61ed-4dc1-b209-0d4c90eac204" containerName="collect-profiles" Jan 29 18:00:22 crc kubenswrapper[4886]: I0129 18:00:22.023150 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5d488e-61ed-4dc1-b209-0d4c90eac204" containerName="collect-profiles" Jan 29 18:00:22 crc kubenswrapper[4886]: I0129 18:00:22.025201 4886 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/community-operators-58r66" Jan 29 18:00:22 crc kubenswrapper[4886]: I0129 18:00:22.040886 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-58r66"] Jan 29 18:00:22 crc kubenswrapper[4886]: I0129 18:00:22.120676 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pksbf\" (UniqueName: \"kubernetes.io/projected/26d3617b-467f-42e7-b171-2652f60e856a-kube-api-access-pksbf\") pod \"community-operators-58r66\" (UID: \"26d3617b-467f-42e7-b171-2652f60e856a\") " pod="openshift-marketplace/community-operators-58r66" Jan 29 18:00:22 crc kubenswrapper[4886]: I0129 18:00:22.120737 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26d3617b-467f-42e7-b171-2652f60e856a-utilities\") pod \"community-operators-58r66\" (UID: \"26d3617b-467f-42e7-b171-2652f60e856a\") " pod="openshift-marketplace/community-operators-58r66" Jan 29 18:00:22 crc kubenswrapper[4886]: I0129 18:00:22.120913 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26d3617b-467f-42e7-b171-2652f60e856a-catalog-content\") pod \"community-operators-58r66\" (UID: \"26d3617b-467f-42e7-b171-2652f60e856a\") " pod="openshift-marketplace/community-operators-58r66" Jan 29 18:00:22 crc kubenswrapper[4886]: I0129 18:00:22.222618 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26d3617b-467f-42e7-b171-2652f60e856a-utilities\") pod \"community-operators-58r66\" (UID: \"26d3617b-467f-42e7-b171-2652f60e856a\") " pod="openshift-marketplace/community-operators-58r66" Jan 29 18:00:22 crc kubenswrapper[4886]: I0129 18:00:22.222786 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26d3617b-467f-42e7-b171-2652f60e856a-catalog-content\") pod \"community-operators-58r66\" (UID: \"26d3617b-467f-42e7-b171-2652f60e856a\") " pod="openshift-marketplace/community-operators-58r66" Jan 29 18:00:22 crc kubenswrapper[4886]: I0129 18:00:22.222994 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pksbf\" (UniqueName: \"kubernetes.io/projected/26d3617b-467f-42e7-b171-2652f60e856a-kube-api-access-pksbf\") pod \"community-operators-58r66\" (UID: \"26d3617b-467f-42e7-b171-2652f60e856a\") " pod="openshift-marketplace/community-operators-58r66" Jan 29 18:00:22 crc kubenswrapper[4886]: I0129 18:00:22.223834 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26d3617b-467f-42e7-b171-2652f60e856a-utilities\") pod \"community-operators-58r66\" (UID: \"26d3617b-467f-42e7-b171-2652f60e856a\") " pod="openshift-marketplace/community-operators-58r66" Jan 29 18:00:22 crc kubenswrapper[4886]: I0129 18:00:22.224135 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26d3617b-467f-42e7-b171-2652f60e856a-catalog-content\") pod \"community-operators-58r66\" (UID: \"26d3617b-467f-42e7-b171-2652f60e856a\") " pod="openshift-marketplace/community-operators-58r66" Jan 29 18:00:22 crc kubenswrapper[4886]: I0129 18:00:22.243486 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pksbf\" (UniqueName: \"kubernetes.io/projected/26d3617b-467f-42e7-b171-2652f60e856a-kube-api-access-pksbf\") pod \"community-operators-58r66\" (UID: \"26d3617b-467f-42e7-b171-2652f60e856a\") " pod="openshift-marketplace/community-operators-58r66" Jan 29 18:00:22 crc kubenswrapper[4886]: I0129 18:00:22.369846 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-58r66" Jan 29 18:00:23 crc kubenswrapper[4886]: I0129 18:00:23.694060 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-58r66"] Jan 29 18:00:23 crc kubenswrapper[4886]: I0129 18:00:23.924876 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-58r66" event={"ID":"26d3617b-467f-42e7-b171-2652f60e856a","Type":"ContainerStarted","Data":"35d7a80f0d4f24685099c1759ec7b05ca9d597f3a2a3871214d4945e075e4c55"} Jan 29 18:00:24 crc kubenswrapper[4886]: I0129 18:00:24.941472 4886 generic.go:334] "Generic (PLEG): container finished" podID="26d3617b-467f-42e7-b171-2652f60e856a" containerID="b620a665001976e28d6625a514bd7e44772c65a9d80ded020ecca7162863f51b" exitCode=0 Jan 29 18:00:24 crc kubenswrapper[4886]: I0129 18:00:24.941608 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-58r66" event={"ID":"26d3617b-467f-42e7-b171-2652f60e856a","Type":"ContainerDied","Data":"b620a665001976e28d6625a514bd7e44772c65a9d80ded020ecca7162863f51b"} Jan 29 18:00:26 crc kubenswrapper[4886]: I0129 18:00:26.971595 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-58r66" event={"ID":"26d3617b-467f-42e7-b171-2652f60e856a","Type":"ContainerStarted","Data":"ee399e1894cc907afbfb2f0a808f1ddd9a838c29f41e68661126447443043148"} Jan 29 18:00:27 crc kubenswrapper[4886]: I0129 18:00:27.983009 4886 generic.go:334] "Generic (PLEG): container finished" podID="26d3617b-467f-42e7-b171-2652f60e856a" containerID="ee399e1894cc907afbfb2f0a808f1ddd9a838c29f41e68661126447443043148" exitCode=0 Jan 29 18:00:27 crc kubenswrapper[4886]: I0129 18:00:27.983067 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-58r66" event={"ID":"26d3617b-467f-42e7-b171-2652f60e856a","Type":"ContainerDied","Data":"ee399e1894cc907afbfb2f0a808f1ddd9a838c29f41e68661126447443043148"} Jan 29 18:00:28 crc kubenswrapper[4886]: I0129 18:00:28.997549 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-58r66" event={"ID":"26d3617b-467f-42e7-b171-2652f60e856a","Type":"ContainerStarted","Data":"e414a896b88f092e6432856ecb0c1b6b443cf48e31bdaac4980adf1ae5105d4f"} Jan 29 18:00:29 crc kubenswrapper[4886]: I0129 18:00:29.041622 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-58r66" podStartSLOduration=4.381319053 podStartE2EDuration="8.041600708s" podCreationTimestamp="2026-01-29 18:00:21 +0000 UTC" firstStartedPulling="2026-01-29 18:00:24.943826851 +0000 UTC m=+5907.852546133" lastFinishedPulling="2026-01-29 18:00:28.604108486 +0000 UTC m=+5911.512827788" observedRunningTime="2026-01-29 18:00:29.016393686 +0000 UTC m=+5911.925113028" watchObservedRunningTime="2026-01-29 18:00:29.041600708 +0000 UTC m=+5911.950319990" Jan 29 18:00:29 crc kubenswrapper[4886]: I0129 18:00:29.673370 4886 patch_prober.go:28] 
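
The pod_startup_latency_tracker line just above carries its own arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that minus the image-pull window (lastFinishedPulling minus firstStartedPulling), so pull time does not count against the startup SLO. A check with the logged timestamps (stdlib; reproduces the logged values up to monotonic-clock rounding):

    package main

    import (
        "fmt"
        "time"
    )

    // Recompute the tracker's numbers from the timestamps it logged.
    func main() {
        parse := func(s string) time.Time {
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2026-01-29 18:00:21 +0000 UTC")
        firstPull := parse("2026-01-29 18:00:24.943826851 +0000 UTC")
        lastPull := parse("2026-01-29 18:00:28.604108486 +0000 UTC")
        running := parse("2026-01-29 18:00:29.041600708 +0000 UTC")

        e2e := running.Sub(created)          // podStartE2EDuration: 8.041600708s
        slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: ~4.381319s
        fmt.Println(e2e, slo)                // the logged SLO differs only in the
                                             // 8th decimal (monotonic readings)
    }

The keystone-cron tracker line later in this log shows the no-pull case: both pulling timestamps are the zero time, so the SLO and E2E durations coincide.
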
interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 18:00:29 crc kubenswrapper[4886]: I0129 18:00:29.673467 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 18:00:29 crc kubenswrapper[4886]: I0129 18:00:29.673541 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" Jan 29 18:00:29 crc kubenswrapper[4886]: I0129 18:00:29.685414 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a"} pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 18:00:29 crc kubenswrapper[4886]: I0129 18:00:29.685542 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" containerID="cri-o://d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a" gracePeriod=600 Jan 29 18:00:29 crc kubenswrapper[4886]: E0129 18:00:29.818636 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:00:30 crc kubenswrapper[4886]: I0129 18:00:30.011704 4886 generic.go:334] "Generic (PLEG): container finished" podID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerID="d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a" exitCode=0 Jan 29 18:00:30 crc kubenswrapper[4886]: I0129 18:00:30.011805 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerDied","Data":"d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a"} Jan 29 18:00:30 crc kubenswrapper[4886]: I0129 18:00:30.011889 4886 scope.go:117] "RemoveContainer" containerID="fa1f6ca4f64abfca286935b5cea47f9bd94b19d5dd8d9a7d6d366866d5a4fa94" Jan 29 18:00:30 crc kubenswrapper[4886]: I0129 18:00:30.013122 4886 scope.go:117] "RemoveContainer" containerID="d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a" Jan 29 18:00:30 crc kubenswrapper[4886]: E0129 18:00:30.013828 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:00:31 crc kubenswrapper[4886]: E0129 18:00:31.617772 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 18:00:32 crc kubenswrapper[4886]: I0129 18:00:32.370561 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-58r66" Jan 29 18:00:32 crc kubenswrapper[4886]: I0129 18:00:32.370670 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-58r66" Jan 29 18:00:33 crc kubenswrapper[4886]: I0129 18:00:33.614830 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-58r66" podUID="26d3617b-467f-42e7-b171-2652f60e856a" containerName="registry-server" probeResult="failure" output=< Jan 29 18:00:33 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Jan 29 18:00:33 crc kubenswrapper[4886]: > Jan 29 18:00:41 crc kubenswrapper[4886]: I0129 18:00:41.616001 4886 scope.go:117] "RemoveContainer" containerID="d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a" Jan 29 18:00:41 crc kubenswrapper[4886]: E0129 18:00:41.617068 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:00:42 crc kubenswrapper[4886]: I0129 18:00:42.426830 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-58r66" Jan 29 18:00:42 crc kubenswrapper[4886]: I0129 18:00:42.477364 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-58r66" Jan 29 18:00:42 crc kubenswrapper[4886]: I0129 18:00:42.686763 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-58r66"] Jan 29 18:00:43 crc kubenswrapper[4886]: E0129 18:00:43.618929 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 18:00:44 crc kubenswrapper[4886]: I0129 18:00:44.188442 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-58r66" podUID="26d3617b-467f-42e7-b171-2652f60e856a" containerName="registry-server" containerID="cri-o://e414a896b88f092e6432856ecb0c1b6b443cf48e31bdaac4980adf1ae5105d4f" gracePeriod=2 Jan 29 18:00:44 crc kubenswrapper[4886]: I0129 18:00:44.787231 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-58r66" Jan 29 18:00:44 crc kubenswrapper[4886]: I0129 18:00:44.909345 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26d3617b-467f-42e7-b171-2652f60e856a-catalog-content\") pod \"26d3617b-467f-42e7-b171-2652f60e856a\" (UID: \"26d3617b-467f-42e7-b171-2652f60e856a\") " Jan 29 18:00:44 crc kubenswrapper[4886]: I0129 18:00:44.909710 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26d3617b-467f-42e7-b171-2652f60e856a-utilities\") pod \"26d3617b-467f-42e7-b171-2652f60e856a\" (UID: \"26d3617b-467f-42e7-b171-2652f60e856a\") " Jan 29 18:00:44 crc kubenswrapper[4886]: I0129 18:00:44.909864 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pksbf\" (UniqueName: \"kubernetes.io/projected/26d3617b-467f-42e7-b171-2652f60e856a-kube-api-access-pksbf\") pod \"26d3617b-467f-42e7-b171-2652f60e856a\" (UID: \"26d3617b-467f-42e7-b171-2652f60e856a\") " Jan 29 18:00:44 crc kubenswrapper[4886]: I0129 18:00:44.910440 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26d3617b-467f-42e7-b171-2652f60e856a-utilities" (OuterVolumeSpecName: "utilities") pod "26d3617b-467f-42e7-b171-2652f60e856a" (UID: "26d3617b-467f-42e7-b171-2652f60e856a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 18:00:44 crc kubenswrapper[4886]: I0129 18:00:44.910647 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26d3617b-467f-42e7-b171-2652f60e856a-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 18:00:44 crc kubenswrapper[4886]: I0129 18:00:44.922757 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26d3617b-467f-42e7-b171-2652f60e856a-kube-api-access-pksbf" (OuterVolumeSpecName: "kube-api-access-pksbf") pod "26d3617b-467f-42e7-b171-2652f60e856a" (UID: "26d3617b-467f-42e7-b171-2652f60e856a"). InnerVolumeSpecName "kube-api-access-pksbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 18:00:44 crc kubenswrapper[4886]: I0129 18:00:44.987134 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26d3617b-467f-42e7-b171-2652f60e856a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26d3617b-467f-42e7-b171-2652f60e856a" (UID: "26d3617b-467f-42e7-b171-2652f60e856a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 18:00:45 crc kubenswrapper[4886]: I0129 18:00:45.012775 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pksbf\" (UniqueName: \"kubernetes.io/projected/26d3617b-467f-42e7-b171-2652f60e856a-kube-api-access-pksbf\") on node \"crc\" DevicePath \"\"" Jan 29 18:00:45 crc kubenswrapper[4886]: I0129 18:00:45.012817 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26d3617b-467f-42e7-b171-2652f60e856a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 18:00:45 crc kubenswrapper[4886]: I0129 18:00:45.207092 4886 generic.go:334] "Generic (PLEG): container finished" podID="26d3617b-467f-42e7-b171-2652f60e856a" containerID="e414a896b88f092e6432856ecb0c1b6b443cf48e31bdaac4980adf1ae5105d4f" exitCode=0 Jan 29 18:00:45 crc kubenswrapper[4886]: I0129 18:00:45.207178 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-58r66" Jan 29 18:00:45 crc kubenswrapper[4886]: I0129 18:00:45.207200 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-58r66" event={"ID":"26d3617b-467f-42e7-b171-2652f60e856a","Type":"ContainerDied","Data":"e414a896b88f092e6432856ecb0c1b6b443cf48e31bdaac4980adf1ae5105d4f"} Jan 29 18:00:45 crc kubenswrapper[4886]: I0129 18:00:45.209348 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-58r66" event={"ID":"26d3617b-467f-42e7-b171-2652f60e856a","Type":"ContainerDied","Data":"35d7a80f0d4f24685099c1759ec7b05ca9d597f3a2a3871214d4945e075e4c55"} Jan 29 18:00:45 crc kubenswrapper[4886]: I0129 18:00:45.209384 4886 scope.go:117] "RemoveContainer" containerID="e414a896b88f092e6432856ecb0c1b6b443cf48e31bdaac4980adf1ae5105d4f" Jan 29 18:00:45 crc kubenswrapper[4886]: I0129 18:00:45.235960 4886 scope.go:117] "RemoveContainer" containerID="ee399e1894cc907afbfb2f0a808f1ddd9a838c29f41e68661126447443043148" Jan 29 18:00:45 crc kubenswrapper[4886]: I0129 18:00:45.275030 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-58r66"] Jan 29 18:00:45 crc kubenswrapper[4886]: I0129 18:00:45.284533 4886 scope.go:117] "RemoveContainer" containerID="b620a665001976e28d6625a514bd7e44772c65a9d80ded020ecca7162863f51b" Jan 29 18:00:45 crc kubenswrapper[4886]: I0129 18:00:45.288139 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-58r66"] Jan 29 18:00:45 crc kubenswrapper[4886]: I0129 18:00:45.330932 4886 scope.go:117] "RemoveContainer" containerID="e414a896b88f092e6432856ecb0c1b6b443cf48e31bdaac4980adf1ae5105d4f" Jan 29 18:00:45 crc kubenswrapper[4886]: E0129 18:00:45.334201 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e414a896b88f092e6432856ecb0c1b6b443cf48e31bdaac4980adf1ae5105d4f\": container with ID starting with e414a896b88f092e6432856ecb0c1b6b443cf48e31bdaac4980adf1ae5105d4f not found: ID does not exist" containerID="e414a896b88f092e6432856ecb0c1b6b443cf48e31bdaac4980adf1ae5105d4f" Jan 29 18:00:45 crc kubenswrapper[4886]: I0129 18:00:45.334241 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e414a896b88f092e6432856ecb0c1b6b443cf48e31bdaac4980adf1ae5105d4f"} err="failed to get container status 
\"e414a896b88f092e6432856ecb0c1b6b443cf48e31bdaac4980adf1ae5105d4f\": rpc error: code = NotFound desc = could not find container \"e414a896b88f092e6432856ecb0c1b6b443cf48e31bdaac4980adf1ae5105d4f\": container with ID starting with e414a896b88f092e6432856ecb0c1b6b443cf48e31bdaac4980adf1ae5105d4f not found: ID does not exist" Jan 29 18:00:45 crc kubenswrapper[4886]: I0129 18:00:45.334273 4886 scope.go:117] "RemoveContainer" containerID="ee399e1894cc907afbfb2f0a808f1ddd9a838c29f41e68661126447443043148" Jan 29 18:00:45 crc kubenswrapper[4886]: E0129 18:00:45.334926 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee399e1894cc907afbfb2f0a808f1ddd9a838c29f41e68661126447443043148\": container with ID starting with ee399e1894cc907afbfb2f0a808f1ddd9a838c29f41e68661126447443043148 not found: ID does not exist" containerID="ee399e1894cc907afbfb2f0a808f1ddd9a838c29f41e68661126447443043148" Jan 29 18:00:45 crc kubenswrapper[4886]: I0129 18:00:45.334960 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee399e1894cc907afbfb2f0a808f1ddd9a838c29f41e68661126447443043148"} err="failed to get container status \"ee399e1894cc907afbfb2f0a808f1ddd9a838c29f41e68661126447443043148\": rpc error: code = NotFound desc = could not find container \"ee399e1894cc907afbfb2f0a808f1ddd9a838c29f41e68661126447443043148\": container with ID starting with ee399e1894cc907afbfb2f0a808f1ddd9a838c29f41e68661126447443043148 not found: ID does not exist" Jan 29 18:00:45 crc kubenswrapper[4886]: I0129 18:00:45.334995 4886 scope.go:117] "RemoveContainer" containerID="b620a665001976e28d6625a514bd7e44772c65a9d80ded020ecca7162863f51b" Jan 29 18:00:45 crc kubenswrapper[4886]: E0129 18:00:45.336293 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b620a665001976e28d6625a514bd7e44772c65a9d80ded020ecca7162863f51b\": container with ID starting with b620a665001976e28d6625a514bd7e44772c65a9d80ded020ecca7162863f51b not found: ID does not exist" containerID="b620a665001976e28d6625a514bd7e44772c65a9d80ded020ecca7162863f51b" Jan 29 18:00:45 crc kubenswrapper[4886]: I0129 18:00:45.336343 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b620a665001976e28d6625a514bd7e44772c65a9d80ded020ecca7162863f51b"} err="failed to get container status \"b620a665001976e28d6625a514bd7e44772c65a9d80ded020ecca7162863f51b\": rpc error: code = NotFound desc = could not find container \"b620a665001976e28d6625a514bd7e44772c65a9d80ded020ecca7162863f51b\": container with ID starting with b620a665001976e28d6625a514bd7e44772c65a9d80ded020ecca7162863f51b not found: ID does not exist" Jan 29 18:00:46 crc kubenswrapper[4886]: I0129 18:00:46.639782 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26d3617b-467f-42e7-b171-2652f60e856a" path="/var/lib/kubelet/pods/26d3617b-467f-42e7-b171-2652f60e856a/volumes" Jan 29 18:00:54 crc kubenswrapper[4886]: E0129 18:00:54.620000 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 18:00:56 crc kubenswrapper[4886]: I0129 18:00:56.484509 4886 scope.go:117] "RemoveContainer" 
containerID="db3e3f16f0932c632a2ab1ffff0f92252979a66c9e52244934f9d97bdd89246b" Jan 29 18:00:56 crc kubenswrapper[4886]: I0129 18:00:56.615517 4886 scope.go:117] "RemoveContainer" containerID="d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a" Jan 29 18:00:56 crc kubenswrapper[4886]: E0129 18:00:56.616282 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:01:00 crc kubenswrapper[4886]: I0129 18:01:00.179076 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29495161-tqptf"] Jan 29 18:01:00 crc kubenswrapper[4886]: E0129 18:01:00.180846 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26d3617b-467f-42e7-b171-2652f60e856a" containerName="extract-content" Jan 29 18:01:00 crc kubenswrapper[4886]: I0129 18:01:00.180881 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="26d3617b-467f-42e7-b171-2652f60e856a" containerName="extract-content" Jan 29 18:01:00 crc kubenswrapper[4886]: E0129 18:01:00.180953 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26d3617b-467f-42e7-b171-2652f60e856a" containerName="extract-utilities" Jan 29 18:01:00 crc kubenswrapper[4886]: I0129 18:01:00.180968 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="26d3617b-467f-42e7-b171-2652f60e856a" containerName="extract-utilities" Jan 29 18:01:00 crc kubenswrapper[4886]: E0129 18:01:00.181034 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26d3617b-467f-42e7-b171-2652f60e856a" containerName="registry-server" Jan 29 18:01:00 crc kubenswrapper[4886]: I0129 18:01:00.181051 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="26d3617b-467f-42e7-b171-2652f60e856a" containerName="registry-server" Jan 29 18:01:00 crc kubenswrapper[4886]: I0129 18:01:00.181513 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="26d3617b-467f-42e7-b171-2652f60e856a" containerName="registry-server" Jan 29 18:01:00 crc kubenswrapper[4886]: I0129 18:01:00.183428 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29495161-tqptf" Jan 29 18:01:00 crc kubenswrapper[4886]: I0129 18:01:00.192014 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29495161-tqptf"] Jan 29 18:01:00 crc kubenswrapper[4886]: I0129 18:01:00.365116 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62fe5584-12c8-4933-868d-bbb9e04f7bb3-config-data\") pod \"keystone-cron-29495161-tqptf\" (UID: \"62fe5584-12c8-4933-868d-bbb9e04f7bb3\") " pod="openstack/keystone-cron-29495161-tqptf" Jan 29 18:01:00 crc kubenswrapper[4886]: I0129 18:01:00.365170 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l694k\" (UniqueName: \"kubernetes.io/projected/62fe5584-12c8-4933-868d-bbb9e04f7bb3-kube-api-access-l694k\") pod \"keystone-cron-29495161-tqptf\" (UID: \"62fe5584-12c8-4933-868d-bbb9e04f7bb3\") " pod="openstack/keystone-cron-29495161-tqptf" Jan 29 18:01:00 crc kubenswrapper[4886]: I0129 18:01:00.365307 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62fe5584-12c8-4933-868d-bbb9e04f7bb3-fernet-keys\") pod \"keystone-cron-29495161-tqptf\" (UID: \"62fe5584-12c8-4933-868d-bbb9e04f7bb3\") " pod="openstack/keystone-cron-29495161-tqptf" Jan 29 18:01:00 crc kubenswrapper[4886]: I0129 18:01:00.365421 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62fe5584-12c8-4933-868d-bbb9e04f7bb3-combined-ca-bundle\") pod \"keystone-cron-29495161-tqptf\" (UID: \"62fe5584-12c8-4933-868d-bbb9e04f7bb3\") " pod="openstack/keystone-cron-29495161-tqptf" Jan 29 18:01:00 crc kubenswrapper[4886]: I0129 18:01:00.468101 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62fe5584-12c8-4933-868d-bbb9e04f7bb3-fernet-keys\") pod \"keystone-cron-29495161-tqptf\" (UID: \"62fe5584-12c8-4933-868d-bbb9e04f7bb3\") " pod="openstack/keystone-cron-29495161-tqptf" Jan 29 18:01:00 crc kubenswrapper[4886]: I0129 18:01:00.468249 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62fe5584-12c8-4933-868d-bbb9e04f7bb3-combined-ca-bundle\") pod \"keystone-cron-29495161-tqptf\" (UID: \"62fe5584-12c8-4933-868d-bbb9e04f7bb3\") " pod="openstack/keystone-cron-29495161-tqptf" Jan 29 18:01:00 crc kubenswrapper[4886]: I0129 18:01:00.468301 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62fe5584-12c8-4933-868d-bbb9e04f7bb3-config-data\") pod \"keystone-cron-29495161-tqptf\" (UID: \"62fe5584-12c8-4933-868d-bbb9e04f7bb3\") " pod="openstack/keystone-cron-29495161-tqptf" Jan 29 18:01:00 crc kubenswrapper[4886]: I0129 18:01:00.468362 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l694k\" (UniqueName: \"kubernetes.io/projected/62fe5584-12c8-4933-868d-bbb9e04f7bb3-kube-api-access-l694k\") pod \"keystone-cron-29495161-tqptf\" (UID: \"62fe5584-12c8-4933-868d-bbb9e04f7bb3\") " pod="openstack/keystone-cron-29495161-tqptf" Jan 29 18:01:00 crc kubenswrapper[4886]: I0129 18:01:00.477666 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62fe5584-12c8-4933-868d-bbb9e04f7bb3-config-data\") pod \"keystone-cron-29495161-tqptf\" (UID: \"62fe5584-12c8-4933-868d-bbb9e04f7bb3\") " pod="openstack/keystone-cron-29495161-tqptf" Jan 29 18:01:00 crc kubenswrapper[4886]: I0129 18:01:00.477959 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62fe5584-12c8-4933-868d-bbb9e04f7bb3-combined-ca-bundle\") pod \"keystone-cron-29495161-tqptf\" (UID: \"62fe5584-12c8-4933-868d-bbb9e04f7bb3\") " pod="openstack/keystone-cron-29495161-tqptf" Jan 29 18:01:00 crc kubenswrapper[4886]: I0129 18:01:00.478191 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62fe5584-12c8-4933-868d-bbb9e04f7bb3-fernet-keys\") pod \"keystone-cron-29495161-tqptf\" (UID: \"62fe5584-12c8-4933-868d-bbb9e04f7bb3\") " pod="openstack/keystone-cron-29495161-tqptf" Jan 29 18:01:00 crc kubenswrapper[4886]: I0129 18:01:00.499397 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l694k\" (UniqueName: \"kubernetes.io/projected/62fe5584-12c8-4933-868d-bbb9e04f7bb3-kube-api-access-l694k\") pod \"keystone-cron-29495161-tqptf\" (UID: \"62fe5584-12c8-4933-868d-bbb9e04f7bb3\") " pod="openstack/keystone-cron-29495161-tqptf" Jan 29 18:01:00 crc kubenswrapper[4886]: I0129 18:01:00.509318 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29495161-tqptf" Jan 29 18:01:01 crc kubenswrapper[4886]: I0129 18:01:01.053187 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29495161-tqptf"] Jan 29 18:01:01 crc kubenswrapper[4886]: W0129 18:01:01.053966 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62fe5584_12c8_4933_868d_bbb9e04f7bb3.slice/crio-b41604ad0f6d67f9c35086ac556ef6beebd1f2ec8853782909cd8f19ef4fd03a WatchSource:0}: Error finding container b41604ad0f6d67f9c35086ac556ef6beebd1f2ec8853782909cd8f19ef4fd03a: Status 404 returned error can't find the container with id b41604ad0f6d67f9c35086ac556ef6beebd1f2ec8853782909cd8f19ef4fd03a Jan 29 18:01:01 crc kubenswrapper[4886]: I0129 18:01:01.409600 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495161-tqptf" event={"ID":"62fe5584-12c8-4933-868d-bbb9e04f7bb3","Type":"ContainerStarted","Data":"f20811ea62519d50e5ec92d004c06f490a5ae492a283aa90b258514105a668e0"} Jan 29 18:01:01 crc kubenswrapper[4886]: I0129 18:01:01.409999 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495161-tqptf" event={"ID":"62fe5584-12c8-4933-868d-bbb9e04f7bb3","Type":"ContainerStarted","Data":"b41604ad0f6d67f9c35086ac556ef6beebd1f2ec8853782909cd8f19ef4fd03a"} Jan 29 18:01:01 crc kubenswrapper[4886]: I0129 18:01:01.435596 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29495161-tqptf" podStartSLOduration=1.435575421 podStartE2EDuration="1.435575421s" podCreationTimestamp="2026-01-29 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 18:01:01.428058629 +0000 UTC m=+5944.336777901" watchObservedRunningTime="2026-01-29 18:01:01.435575421 +0000 UTC m=+5944.344294693" Jan 29 18:01:05 crc kubenswrapper[4886]: I0129 18:01:05.454761 4886 
generic.go:334] "Generic (PLEG): container finished" podID="62fe5584-12c8-4933-868d-bbb9e04f7bb3" containerID="f20811ea62519d50e5ec92d004c06f490a5ae492a283aa90b258514105a668e0" exitCode=0 Jan 29 18:01:05 crc kubenswrapper[4886]: I0129 18:01:05.454823 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495161-tqptf" event={"ID":"62fe5584-12c8-4933-868d-bbb9e04f7bb3","Type":"ContainerDied","Data":"f20811ea62519d50e5ec92d004c06f490a5ae492a283aa90b258514105a668e0"} Jan 29 18:01:06 crc kubenswrapper[4886]: I0129 18:01:06.955303 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29495161-tqptf" Jan 29 18:01:07 crc kubenswrapper[4886]: I0129 18:01:07.099492 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62fe5584-12c8-4933-868d-bbb9e04f7bb3-combined-ca-bundle\") pod \"62fe5584-12c8-4933-868d-bbb9e04f7bb3\" (UID: \"62fe5584-12c8-4933-868d-bbb9e04f7bb3\") " Jan 29 18:01:07 crc kubenswrapper[4886]: I0129 18:01:07.099636 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62fe5584-12c8-4933-868d-bbb9e04f7bb3-config-data\") pod \"62fe5584-12c8-4933-868d-bbb9e04f7bb3\" (UID: \"62fe5584-12c8-4933-868d-bbb9e04f7bb3\") " Jan 29 18:01:07 crc kubenswrapper[4886]: I0129 18:01:07.099686 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62fe5584-12c8-4933-868d-bbb9e04f7bb3-fernet-keys\") pod \"62fe5584-12c8-4933-868d-bbb9e04f7bb3\" (UID: \"62fe5584-12c8-4933-868d-bbb9e04f7bb3\") " Jan 29 18:01:07 crc kubenswrapper[4886]: I0129 18:01:07.099804 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l694k\" (UniqueName: \"kubernetes.io/projected/62fe5584-12c8-4933-868d-bbb9e04f7bb3-kube-api-access-l694k\") pod \"62fe5584-12c8-4933-868d-bbb9e04f7bb3\" (UID: \"62fe5584-12c8-4933-868d-bbb9e04f7bb3\") " Jan 29 18:01:07 crc kubenswrapper[4886]: I0129 18:01:07.106634 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62fe5584-12c8-4933-868d-bbb9e04f7bb3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "62fe5584-12c8-4933-868d-bbb9e04f7bb3" (UID: "62fe5584-12c8-4933-868d-bbb9e04f7bb3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 18:01:07 crc kubenswrapper[4886]: I0129 18:01:07.106704 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62fe5584-12c8-4933-868d-bbb9e04f7bb3-kube-api-access-l694k" (OuterVolumeSpecName: "kube-api-access-l694k") pod "62fe5584-12c8-4933-868d-bbb9e04f7bb3" (UID: "62fe5584-12c8-4933-868d-bbb9e04f7bb3"). InnerVolumeSpecName "kube-api-access-l694k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 18:01:07 crc kubenswrapper[4886]: I0129 18:01:07.137883 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62fe5584-12c8-4933-868d-bbb9e04f7bb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62fe5584-12c8-4933-868d-bbb9e04f7bb3" (UID: "62fe5584-12c8-4933-868d-bbb9e04f7bb3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 18:01:07 crc kubenswrapper[4886]: I0129 18:01:07.181492 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62fe5584-12c8-4933-868d-bbb9e04f7bb3-config-data" (OuterVolumeSpecName: "config-data") pod "62fe5584-12c8-4933-868d-bbb9e04f7bb3" (UID: "62fe5584-12c8-4933-868d-bbb9e04f7bb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 18:01:07 crc kubenswrapper[4886]: I0129 18:01:07.203260 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62fe5584-12c8-4933-868d-bbb9e04f7bb3-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 18:01:07 crc kubenswrapper[4886]: I0129 18:01:07.203295 4886 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62fe5584-12c8-4933-868d-bbb9e04f7bb3-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 18:01:07 crc kubenswrapper[4886]: I0129 18:01:07.203308 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l694k\" (UniqueName: \"kubernetes.io/projected/62fe5584-12c8-4933-868d-bbb9e04f7bb3-kube-api-access-l694k\") on node \"crc\" DevicePath \"\"" Jan 29 18:01:07 crc kubenswrapper[4886]: I0129 18:01:07.203336 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62fe5584-12c8-4933-868d-bbb9e04f7bb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 18:01:07 crc kubenswrapper[4886]: I0129 18:01:07.483101 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495161-tqptf" event={"ID":"62fe5584-12c8-4933-868d-bbb9e04f7bb3","Type":"ContainerDied","Data":"b41604ad0f6d67f9c35086ac556ef6beebd1f2ec8853782909cd8f19ef4fd03a"} Jan 29 18:01:07 crc kubenswrapper[4886]: I0129 18:01:07.483154 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b41604ad0f6d67f9c35086ac556ef6beebd1f2ec8853782909cd8f19ef4fd03a" Jan 29 18:01:07 crc kubenswrapper[4886]: I0129 18:01:07.483169 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29495161-tqptf" Jan 29 18:01:08 crc kubenswrapper[4886]: I0129 18:01:08.623674 4886 scope.go:117] "RemoveContainer" containerID="d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a" Jan 29 18:01:08 crc kubenswrapper[4886]: E0129 18:01:08.624464 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:01:09 crc kubenswrapper[4886]: E0129 18:01:09.619689 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 18:01:20 crc kubenswrapper[4886]: I0129 18:01:20.615095 4886 scope.go:117] "RemoveContainer" containerID="d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a" Jan 29 18:01:20 crc kubenswrapper[4886]: E0129 18:01:20.616057 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:01:22 crc kubenswrapper[4886]: E0129 18:01:22.620006 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" Jan 29 18:01:31 crc kubenswrapper[4886]: I0129 18:01:31.615899 4886 scope.go:117] "RemoveContainer" containerID="d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a" Jan 29 18:01:31 crc kubenswrapper[4886]: E0129 18:01:31.616991 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:01:34 crc kubenswrapper[4886]: I0129 18:01:34.626503 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 18:01:35 crc kubenswrapper[4886]: I0129 18:01:35.821283 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsjfd" event={"ID":"7ceed770-f253-4044-92f0-c8a07b89b621","Type":"ContainerStarted","Data":"a26e038fba7b20c6bbd8f67983806ee67b86edda4f42bed3b1e5dc6e19691d86"} Jan 29 18:01:37 crc kubenswrapper[4886]: I0129 18:01:37.848967 4886 generic.go:334] "Generic (PLEG): container finished" podID="7ceed770-f253-4044-92f0-c8a07b89b621" 
containerID="a26e038fba7b20c6bbd8f67983806ee67b86edda4f42bed3b1e5dc6e19691d86" exitCode=0 Jan 29 18:01:37 crc kubenswrapper[4886]: I0129 18:01:37.849080 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsjfd" event={"ID":"7ceed770-f253-4044-92f0-c8a07b89b621","Type":"ContainerDied","Data":"a26e038fba7b20c6bbd8f67983806ee67b86edda4f42bed3b1e5dc6e19691d86"} Jan 29 18:01:38 crc kubenswrapper[4886]: I0129 18:01:38.861468 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsjfd" event={"ID":"7ceed770-f253-4044-92f0-c8a07b89b621","Type":"ContainerStarted","Data":"57611bb9d4c88485f704785c6260beffdf3364717c2a0a0bf33dbfb1aa8bb69a"} Jan 29 18:01:42 crc kubenswrapper[4886]: I0129 18:01:42.615243 4886 scope.go:117] "RemoveContainer" containerID="d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a" Jan 29 18:01:42 crc kubenswrapper[4886]: E0129 18:01:42.616064 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:01:44 crc kubenswrapper[4886]: I0129 18:01:44.839218 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qsjfd" Jan 29 18:01:44 crc kubenswrapper[4886]: I0129 18:01:44.839589 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qsjfd" Jan 29 18:01:44 crc kubenswrapper[4886]: I0129 18:01:44.903078 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qsjfd" Jan 29 18:01:44 crc kubenswrapper[4886]: I0129 18:01:44.929498 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qsjfd" podStartSLOduration=8.561621441 podStartE2EDuration="21m30.929477222s" podCreationTimestamp="2026-01-29 17:40:14 +0000 UTC" firstStartedPulling="2026-01-29 17:40:15.942749248 +0000 UTC m=+4698.851468530" lastFinishedPulling="2026-01-29 18:01:38.310604999 +0000 UTC m=+5981.219324311" observedRunningTime="2026-01-29 18:01:38.892139234 +0000 UTC m=+5981.800858526" watchObservedRunningTime="2026-01-29 18:01:44.929477222 +0000 UTC m=+5987.838196504" Jan 29 18:01:45 crc kubenswrapper[4886]: I0129 18:01:45.009454 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qsjfd" Jan 29 18:01:45 crc kubenswrapper[4886]: I0129 18:01:45.144497 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qsjfd"] Jan 29 18:01:46 crc kubenswrapper[4886]: I0129 18:01:46.951275 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qsjfd" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" containerName="registry-server" containerID="cri-o://57611bb9d4c88485f704785c6260beffdf3364717c2a0a0bf33dbfb1aa8bb69a" gracePeriod=2 Jan 29 18:01:47 crc kubenswrapper[4886]: I0129 18:01:47.435042 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qsjfd" Jan 29 18:01:47 crc kubenswrapper[4886]: I0129 18:01:47.481414 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ceed770-f253-4044-92f0-c8a07b89b621-catalog-content\") pod \"7ceed770-f253-4044-92f0-c8a07b89b621\" (UID: \"7ceed770-f253-4044-92f0-c8a07b89b621\") " Jan 29 18:01:47 crc kubenswrapper[4886]: I0129 18:01:47.482524 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ceed770-f253-4044-92f0-c8a07b89b621-utilities\") pod \"7ceed770-f253-4044-92f0-c8a07b89b621\" (UID: \"7ceed770-f253-4044-92f0-c8a07b89b621\") " Jan 29 18:01:47 crc kubenswrapper[4886]: I0129 18:01:47.482658 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlxp8\" (UniqueName: \"kubernetes.io/projected/7ceed770-f253-4044-92f0-c8a07b89b621-kube-api-access-nlxp8\") pod \"7ceed770-f253-4044-92f0-c8a07b89b621\" (UID: \"7ceed770-f253-4044-92f0-c8a07b89b621\") " Jan 29 18:01:47 crc kubenswrapper[4886]: I0129 18:01:47.483553 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ceed770-f253-4044-92f0-c8a07b89b621-utilities" (OuterVolumeSpecName: "utilities") pod "7ceed770-f253-4044-92f0-c8a07b89b621" (UID: "7ceed770-f253-4044-92f0-c8a07b89b621"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 18:01:47 crc kubenswrapper[4886]: I0129 18:01:47.483917 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ceed770-f253-4044-92f0-c8a07b89b621-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 18:01:47 crc kubenswrapper[4886]: I0129 18:01:47.488177 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ceed770-f253-4044-92f0-c8a07b89b621-kube-api-access-nlxp8" (OuterVolumeSpecName: "kube-api-access-nlxp8") pod "7ceed770-f253-4044-92f0-c8a07b89b621" (UID: "7ceed770-f253-4044-92f0-c8a07b89b621"). InnerVolumeSpecName "kube-api-access-nlxp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 18:01:47 crc kubenswrapper[4886]: I0129 18:01:47.539738 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ceed770-f253-4044-92f0-c8a07b89b621-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ceed770-f253-4044-92f0-c8a07b89b621" (UID: "7ceed770-f253-4044-92f0-c8a07b89b621"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 18:01:47 crc kubenswrapper[4886]: I0129 18:01:47.586265 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlxp8\" (UniqueName: \"kubernetes.io/projected/7ceed770-f253-4044-92f0-c8a07b89b621-kube-api-access-nlxp8\") on node \"crc\" DevicePath \"\"" Jan 29 18:01:47 crc kubenswrapper[4886]: I0129 18:01:47.586304 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ceed770-f253-4044-92f0-c8a07b89b621-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 18:01:47 crc kubenswrapper[4886]: I0129 18:01:47.964126 4886 generic.go:334] "Generic (PLEG): container finished" podID="7ceed770-f253-4044-92f0-c8a07b89b621" containerID="57611bb9d4c88485f704785c6260beffdf3364717c2a0a0bf33dbfb1aa8bb69a" exitCode=0 Jan 29 18:01:47 crc kubenswrapper[4886]: I0129 18:01:47.964162 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsjfd" event={"ID":"7ceed770-f253-4044-92f0-c8a07b89b621","Type":"ContainerDied","Data":"57611bb9d4c88485f704785c6260beffdf3364717c2a0a0bf33dbfb1aa8bb69a"} Jan 29 18:01:47 crc kubenswrapper[4886]: I0129 18:01:47.964187 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsjfd" event={"ID":"7ceed770-f253-4044-92f0-c8a07b89b621","Type":"ContainerDied","Data":"fb5b6b721dd0a2050f48ef0e26fac1871e4ba7b7b47b95e41a00c0852ef2c55b"} Jan 29 18:01:47 crc kubenswrapper[4886]: I0129 18:01:47.964204 4886 scope.go:117] "RemoveContainer" containerID="57611bb9d4c88485f704785c6260beffdf3364717c2a0a0bf33dbfb1aa8bb69a" Jan 29 18:01:47 crc kubenswrapper[4886]: I0129 18:01:47.964388 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qsjfd" Jan 29 18:01:48 crc kubenswrapper[4886]: I0129 18:01:48.001676 4886 scope.go:117] "RemoveContainer" containerID="a26e038fba7b20c6bbd8f67983806ee67b86edda4f42bed3b1e5dc6e19691d86" Jan 29 18:01:48 crc kubenswrapper[4886]: I0129 18:01:48.019394 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qsjfd"] Jan 29 18:01:48 crc kubenswrapper[4886]: I0129 18:01:48.032436 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qsjfd"] Jan 29 18:01:48 crc kubenswrapper[4886]: I0129 18:01:48.049462 4886 scope.go:117] "RemoveContainer" containerID="bedb65e37127565b5119ee8d90f572bdf6b6802d26fcd6797bad10fc8e07c14b" Jan 29 18:01:48 crc kubenswrapper[4886]: I0129 18:01:48.099055 4886 scope.go:117] "RemoveContainer" containerID="57611bb9d4c88485f704785c6260beffdf3364717c2a0a0bf33dbfb1aa8bb69a" Jan 29 18:01:48 crc kubenswrapper[4886]: E0129 18:01:48.101222 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57611bb9d4c88485f704785c6260beffdf3364717c2a0a0bf33dbfb1aa8bb69a\": container with ID starting with 57611bb9d4c88485f704785c6260beffdf3364717c2a0a0bf33dbfb1aa8bb69a not found: ID does not exist" containerID="57611bb9d4c88485f704785c6260beffdf3364717c2a0a0bf33dbfb1aa8bb69a" Jan 29 18:01:48 crc kubenswrapper[4886]: I0129 18:01:48.101286 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57611bb9d4c88485f704785c6260beffdf3364717c2a0a0bf33dbfb1aa8bb69a"} err="failed to get container status \"57611bb9d4c88485f704785c6260beffdf3364717c2a0a0bf33dbfb1aa8bb69a\": rpc error: code = NotFound desc = could not find container \"57611bb9d4c88485f704785c6260beffdf3364717c2a0a0bf33dbfb1aa8bb69a\": container with ID starting with 57611bb9d4c88485f704785c6260beffdf3364717c2a0a0bf33dbfb1aa8bb69a not found: ID does not exist" Jan 29 18:01:48 crc kubenswrapper[4886]: I0129 18:01:48.101316 4886 scope.go:117] "RemoveContainer" containerID="a26e038fba7b20c6bbd8f67983806ee67b86edda4f42bed3b1e5dc6e19691d86" Jan 29 18:01:48 crc kubenswrapper[4886]: E0129 18:01:48.101870 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a26e038fba7b20c6bbd8f67983806ee67b86edda4f42bed3b1e5dc6e19691d86\": container with ID starting with a26e038fba7b20c6bbd8f67983806ee67b86edda4f42bed3b1e5dc6e19691d86 not found: ID does not exist" containerID="a26e038fba7b20c6bbd8f67983806ee67b86edda4f42bed3b1e5dc6e19691d86" Jan 29 18:01:48 crc kubenswrapper[4886]: I0129 18:01:48.101908 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a26e038fba7b20c6bbd8f67983806ee67b86edda4f42bed3b1e5dc6e19691d86"} err="failed to get container status \"a26e038fba7b20c6bbd8f67983806ee67b86edda4f42bed3b1e5dc6e19691d86\": rpc error: code = NotFound desc = could not find container \"a26e038fba7b20c6bbd8f67983806ee67b86edda4f42bed3b1e5dc6e19691d86\": container with ID starting with a26e038fba7b20c6bbd8f67983806ee67b86edda4f42bed3b1e5dc6e19691d86 not found: ID does not exist" Jan 29 18:01:48 crc kubenswrapper[4886]: I0129 18:01:48.101935 4886 scope.go:117] "RemoveContainer" containerID="bedb65e37127565b5119ee8d90f572bdf6b6802d26fcd6797bad10fc8e07c14b" Jan 29 18:01:48 crc kubenswrapper[4886]: E0129 18:01:48.103769 4886 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"bedb65e37127565b5119ee8d90f572bdf6b6802d26fcd6797bad10fc8e07c14b\": container with ID starting with bedb65e37127565b5119ee8d90f572bdf6b6802d26fcd6797bad10fc8e07c14b not found: ID does not exist" containerID="bedb65e37127565b5119ee8d90f572bdf6b6802d26fcd6797bad10fc8e07c14b" Jan 29 18:01:48 crc kubenswrapper[4886]: I0129 18:01:48.103816 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bedb65e37127565b5119ee8d90f572bdf6b6802d26fcd6797bad10fc8e07c14b"} err="failed to get container status \"bedb65e37127565b5119ee8d90f572bdf6b6802d26fcd6797bad10fc8e07c14b\": rpc error: code = NotFound desc = could not find container \"bedb65e37127565b5119ee8d90f572bdf6b6802d26fcd6797bad10fc8e07c14b\": container with ID starting with bedb65e37127565b5119ee8d90f572bdf6b6802d26fcd6797bad10fc8e07c14b not found: ID does not exist" Jan 29 18:01:48 crc kubenswrapper[4886]: I0129 18:01:48.571466 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q5rzj"] Jan 29 18:01:48 crc kubenswrapper[4886]: E0129 18:01:48.572001 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" containerName="extract-utilities" Jan 29 18:01:48 crc kubenswrapper[4886]: I0129 18:01:48.572021 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" containerName="extract-utilities" Jan 29 18:01:48 crc kubenswrapper[4886]: E0129 18:01:48.572047 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" containerName="extract-content" Jan 29 18:01:48 crc kubenswrapper[4886]: I0129 18:01:48.572056 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" containerName="extract-content" Jan 29 18:01:48 crc kubenswrapper[4886]: E0129 18:01:48.572070 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" containerName="registry-server" Jan 29 18:01:48 crc kubenswrapper[4886]: I0129 18:01:48.572079 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" containerName="registry-server" Jan 29 18:01:48 crc kubenswrapper[4886]: E0129 18:01:48.572129 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62fe5584-12c8-4933-868d-bbb9e04f7bb3" containerName="keystone-cron" Jan 29 18:01:48 crc kubenswrapper[4886]: I0129 18:01:48.572137 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="62fe5584-12c8-4933-868d-bbb9e04f7bb3" containerName="keystone-cron" Jan 29 18:01:48 crc kubenswrapper[4886]: I0129 18:01:48.572414 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="62fe5584-12c8-4933-868d-bbb9e04f7bb3" containerName="keystone-cron" Jan 29 18:01:48 crc kubenswrapper[4886]: I0129 18:01:48.572468 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" containerName="registry-server" Jan 29 18:01:48 crc kubenswrapper[4886]: I0129 18:01:48.574600 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q5rzj" Jan 29 18:01:48 crc kubenswrapper[4886]: I0129 18:01:48.610885 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4-catalog-content\") pod \"certified-operators-q5rzj\" (UID: \"3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4\") " pod="openshift-marketplace/certified-operators-q5rzj" Jan 29 18:01:48 crc kubenswrapper[4886]: I0129 18:01:48.611137 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mp79\" (UniqueName: \"kubernetes.io/projected/3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4-kube-api-access-5mp79\") pod \"certified-operators-q5rzj\" (UID: \"3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4\") " pod="openshift-marketplace/certified-operators-q5rzj" Jan 29 18:01:48 crc kubenswrapper[4886]: I0129 18:01:48.611186 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4-utilities\") pod \"certified-operators-q5rzj\" (UID: \"3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4\") " pod="openshift-marketplace/certified-operators-q5rzj" Jan 29 18:01:48 crc kubenswrapper[4886]: I0129 18:01:48.626168 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ceed770-f253-4044-92f0-c8a07b89b621" path="/var/lib/kubelet/pods/7ceed770-f253-4044-92f0-c8a07b89b621/volumes" Jan 29 18:01:48 crc kubenswrapper[4886]: I0129 18:01:48.626885 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q5rzj"] Jan 29 18:01:48 crc kubenswrapper[4886]: I0129 18:01:48.713240 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4-catalog-content\") pod \"certified-operators-q5rzj\" (UID: \"3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4\") " pod="openshift-marketplace/certified-operators-q5rzj" Jan 29 18:01:48 crc kubenswrapper[4886]: I0129 18:01:48.713534 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mp79\" (UniqueName: \"kubernetes.io/projected/3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4-kube-api-access-5mp79\") pod \"certified-operators-q5rzj\" (UID: \"3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4\") " pod="openshift-marketplace/certified-operators-q5rzj" Jan 29 18:01:48 crc kubenswrapper[4886]: I0129 18:01:48.713570 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4-utilities\") pod \"certified-operators-q5rzj\" (UID: \"3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4\") " pod="openshift-marketplace/certified-operators-q5rzj" Jan 29 18:01:48 crc kubenswrapper[4886]: I0129 18:01:48.714111 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4-catalog-content\") pod \"certified-operators-q5rzj\" (UID: \"3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4\") " pod="openshift-marketplace/certified-operators-q5rzj" Jan 29 18:01:48 crc kubenswrapper[4886]: I0129 18:01:48.714343 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4-utilities\") pod \"certified-operators-q5rzj\" (UID: \"3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4\") " pod="openshift-marketplace/certified-operators-q5rzj" Jan 29 18:01:48 crc kubenswrapper[4886]: I0129 18:01:48.741218 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mp79\" (UniqueName: \"kubernetes.io/projected/3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4-kube-api-access-5mp79\") pod \"certified-operators-q5rzj\" (UID: \"3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4\") " pod="openshift-marketplace/certified-operators-q5rzj" Jan 29 18:01:48 crc kubenswrapper[4886]: I0129 18:01:48.898099 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q5rzj" Jan 29 18:01:49 crc kubenswrapper[4886]: I0129 18:01:49.557539 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q5rzj"] Jan 29 18:01:49 crc kubenswrapper[4886]: I0129 18:01:49.990730 4886 generic.go:334] "Generic (PLEG): container finished" podID="3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4" containerID="72d80eebb60ae8c08eef0770791971e0fbc24b07b588eb7895b7a9f050ba5462" exitCode=0 Jan 29 18:01:49 crc kubenswrapper[4886]: I0129 18:01:49.990794 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5rzj" event={"ID":"3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4","Type":"ContainerDied","Data":"72d80eebb60ae8c08eef0770791971e0fbc24b07b588eb7895b7a9f050ba5462"} Jan 29 18:01:49 crc kubenswrapper[4886]: I0129 18:01:49.990835 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5rzj" event={"ID":"3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4","Type":"ContainerStarted","Data":"9ab7c1c2b880cc6e9d45935d6da276f15ad16d601658ac302237a0b2c36661a6"} Jan 29 18:01:51 crc kubenswrapper[4886]: I0129 18:01:51.004965 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5rzj" event={"ID":"3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4","Type":"ContainerStarted","Data":"5f7aea5bf74235eef90aae221f6a2aef210bdde2ace2bb420c8f950bde0f3825"} Jan 29 18:01:52 crc kubenswrapper[4886]: I0129 18:01:52.014678 4886 generic.go:334] "Generic (PLEG): container finished" podID="3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4" containerID="5f7aea5bf74235eef90aae221f6a2aef210bdde2ace2bb420c8f950bde0f3825" exitCode=0 Jan 29 18:01:52 crc kubenswrapper[4886]: I0129 18:01:52.014780 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5rzj" event={"ID":"3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4","Type":"ContainerDied","Data":"5f7aea5bf74235eef90aae221f6a2aef210bdde2ace2bb420c8f950bde0f3825"} Jan 29 18:01:53 crc kubenswrapper[4886]: I0129 18:01:53.029062 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5rzj" event={"ID":"3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4","Type":"ContainerStarted","Data":"60dffdf8cb175f42305628a1f37333e3d75d62cb2e4e50881c05113585bcdac4"} Jan 29 18:01:53 crc kubenswrapper[4886]: I0129 18:01:53.054104 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q5rzj" podStartSLOduration=2.639959664 podStartE2EDuration="5.054087096s" podCreationTimestamp="2026-01-29 18:01:48 +0000 UTC" firstStartedPulling="2026-01-29 18:01:49.993395938 +0000 UTC m=+5992.902115230" lastFinishedPulling="2026-01-29 18:01:52.40752336 +0000 
UTC m=+5995.316242662" observedRunningTime="2026-01-29 18:01:53.043897558 +0000 UTC m=+5995.952616840" watchObservedRunningTime="2026-01-29 18:01:53.054087096 +0000 UTC m=+5995.962806358" Jan 29 18:01:53 crc kubenswrapper[4886]: I0129 18:01:53.615710 4886 scope.go:117] "RemoveContainer" containerID="d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a" Jan 29 18:01:53 crc kubenswrapper[4886]: E0129 18:01:53.616278 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:01:58 crc kubenswrapper[4886]: I0129 18:01:58.898772 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q5rzj" Jan 29 18:01:58 crc kubenswrapper[4886]: I0129 18:01:58.899512 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q5rzj" Jan 29 18:01:58 crc kubenswrapper[4886]: I0129 18:01:58.958602 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q5rzj" Jan 29 18:01:59 crc kubenswrapper[4886]: I0129 18:01:59.187623 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q5rzj" Jan 29 18:01:59 crc kubenswrapper[4886]: I0129 18:01:59.252230 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q5rzj"] Jan 29 18:02:01 crc kubenswrapper[4886]: I0129 18:02:01.120824 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q5rzj" podUID="3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4" containerName="registry-server" containerID="cri-o://60dffdf8cb175f42305628a1f37333e3d75d62cb2e4e50881c05113585bcdac4" gracePeriod=2 Jan 29 18:02:02 crc kubenswrapper[4886]: I0129 18:02:02.132475 4886 generic.go:334] "Generic (PLEG): container finished" podID="3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4" containerID="60dffdf8cb175f42305628a1f37333e3d75d62cb2e4e50881c05113585bcdac4" exitCode=0 Jan 29 18:02:02 crc kubenswrapper[4886]: I0129 18:02:02.132531 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5rzj" event={"ID":"3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4","Type":"ContainerDied","Data":"60dffdf8cb175f42305628a1f37333e3d75d62cb2e4e50881c05113585bcdac4"} Jan 29 18:02:02 crc kubenswrapper[4886]: I0129 18:02:02.280821 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q5rzj" Jan 29 18:02:02 crc kubenswrapper[4886]: I0129 18:02:02.416034 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4-catalog-content\") pod \"3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4\" (UID: \"3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4\") " Jan 29 18:02:02 crc kubenswrapper[4886]: I0129 18:02:02.416253 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4-utilities\") pod \"3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4\" (UID: \"3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4\") " Jan 29 18:02:02 crc kubenswrapper[4886]: I0129 18:02:02.416607 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mp79\" (UniqueName: \"kubernetes.io/projected/3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4-kube-api-access-5mp79\") pod \"3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4\" (UID: \"3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4\") " Jan 29 18:02:02 crc kubenswrapper[4886]: I0129 18:02:02.418558 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4-utilities" (OuterVolumeSpecName: "utilities") pod "3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4" (UID: "3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 18:02:02 crc kubenswrapper[4886]: I0129 18:02:02.420109 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 18:02:02 crc kubenswrapper[4886]: I0129 18:02:02.427713 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4-kube-api-access-5mp79" (OuterVolumeSpecName: "kube-api-access-5mp79") pod "3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4" (UID: "3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4"). InnerVolumeSpecName "kube-api-access-5mp79". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 18:02:02 crc kubenswrapper[4886]: I0129 18:02:02.473473 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4" (UID: "3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 18:02:02 crc kubenswrapper[4886]: I0129 18:02:02.521794 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 18:02:02 crc kubenswrapper[4886]: I0129 18:02:02.521827 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mp79\" (UniqueName: \"kubernetes.io/projected/3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4-kube-api-access-5mp79\") on node \"crc\" DevicePath \"\"" Jan 29 18:02:03 crc kubenswrapper[4886]: I0129 18:02:03.153280 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5rzj" event={"ID":"3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4","Type":"ContainerDied","Data":"9ab7c1c2b880cc6e9d45935d6da276f15ad16d601658ac302237a0b2c36661a6"} Jan 29 18:02:03 crc kubenswrapper[4886]: I0129 18:02:03.153361 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q5rzj" Jan 29 18:02:03 crc kubenswrapper[4886]: I0129 18:02:03.153737 4886 scope.go:117] "RemoveContainer" containerID="60dffdf8cb175f42305628a1f37333e3d75d62cb2e4e50881c05113585bcdac4" Jan 29 18:02:03 crc kubenswrapper[4886]: I0129 18:02:03.197861 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q5rzj"] Jan 29 18:02:03 crc kubenswrapper[4886]: I0129 18:02:03.199178 4886 scope.go:117] "RemoveContainer" containerID="5f7aea5bf74235eef90aae221f6a2aef210bdde2ace2bb420c8f950bde0f3825" Jan 29 18:02:03 crc kubenswrapper[4886]: I0129 18:02:03.210068 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q5rzj"] Jan 29 18:02:03 crc kubenswrapper[4886]: I0129 18:02:03.230659 4886 scope.go:117] "RemoveContainer" containerID="72d80eebb60ae8c08eef0770791971e0fbc24b07b588eb7895b7a9f050ba5462" Jan 29 18:02:04 crc kubenswrapper[4886]: I0129 18:02:04.640225 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4" path="/var/lib/kubelet/pods/3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4/volumes" Jan 29 18:02:07 crc kubenswrapper[4886]: I0129 18:02:07.617406 4886 scope.go:117] "RemoveContainer" containerID="d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a" Jan 29 18:02:07 crc kubenswrapper[4886]: E0129 18:02:07.618566 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:02:20 crc kubenswrapper[4886]: I0129 18:02:20.617517 4886 scope.go:117] "RemoveContainer" containerID="d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a" Jan 29 18:02:20 crc kubenswrapper[4886]: E0129 18:02:20.619111 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:02:35 crc kubenswrapper[4886]: I0129 18:02:35.615711 4886 scope.go:117] "RemoveContainer" containerID="d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a" Jan 29 18:02:35 crc kubenswrapper[4886]: E0129 18:02:35.616827 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:02:48 crc kubenswrapper[4886]: I0129 18:02:48.623845 4886 scope.go:117] "RemoveContainer" containerID="d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a" Jan 29 18:02:48 crc kubenswrapper[4886]: E0129 18:02:48.625220 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:03:01 crc kubenswrapper[4886]: I0129 18:03:01.616101 4886 scope.go:117] "RemoveContainer" containerID="d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a" Jan 29 18:03:01 crc kubenswrapper[4886]: E0129 18:03:01.617399 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:03:16 crc kubenswrapper[4886]: I0129 18:03:16.616620 4886 scope.go:117] "RemoveContainer" containerID="d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a" Jan 29 18:03:16 crc kubenswrapper[4886]: E0129 18:03:16.619965 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:03:19 crc kubenswrapper[4886]: I0129 18:03:19.299204 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lsq2b/must-gather-jss9f"] Jan 29 18:03:19 crc kubenswrapper[4886]: E0129 18:03:19.300198 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4" containerName="extract-content" Jan 29 18:03:19 crc kubenswrapper[4886]: I0129 18:03:19.300219 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4" containerName="extract-content" Jan 29 18:03:19 crc kubenswrapper[4886]: E0129 18:03:19.300239 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4" 
containerName="registry-server" Jan 29 18:03:19 crc kubenswrapper[4886]: I0129 18:03:19.300247 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4" containerName="registry-server" Jan 29 18:03:19 crc kubenswrapper[4886]: E0129 18:03:19.300261 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4" containerName="extract-utilities" Jan 29 18:03:19 crc kubenswrapper[4886]: I0129 18:03:19.300268 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4" containerName="extract-utilities" Jan 29 18:03:19 crc kubenswrapper[4886]: I0129 18:03:19.300520 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fd795ce-91a1-4c71-8332-d1c6b8b9fdf4" containerName="registry-server" Jan 29 18:03:19 crc kubenswrapper[4886]: I0129 18:03:19.301788 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lsq2b/must-gather-jss9f" Jan 29 18:03:19 crc kubenswrapper[4886]: I0129 18:03:19.305785 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-lsq2b"/"kube-root-ca.crt" Jan 29 18:03:19 crc kubenswrapper[4886]: I0129 18:03:19.305990 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-lsq2b"/"default-dockercfg-xmln4" Jan 29 18:03:19 crc kubenswrapper[4886]: I0129 18:03:19.309002 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-lsq2b"/"openshift-service-ca.crt" Jan 29 18:03:19 crc kubenswrapper[4886]: I0129 18:03:19.317680 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lsq2b/must-gather-jss9f"] Jan 29 18:03:19 crc kubenswrapper[4886]: I0129 18:03:19.358250 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv85m\" (UniqueName: \"kubernetes.io/projected/fd01fd0d-8339-41ba-be01-6c3b723b2ec9-kube-api-access-lv85m\") pod \"must-gather-jss9f\" (UID: \"fd01fd0d-8339-41ba-be01-6c3b723b2ec9\") " pod="openshift-must-gather-lsq2b/must-gather-jss9f" Jan 29 18:03:19 crc kubenswrapper[4886]: I0129 18:03:19.358314 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fd01fd0d-8339-41ba-be01-6c3b723b2ec9-must-gather-output\") pod \"must-gather-jss9f\" (UID: \"fd01fd0d-8339-41ba-be01-6c3b723b2ec9\") " pod="openshift-must-gather-lsq2b/must-gather-jss9f" Jan 29 18:03:19 crc kubenswrapper[4886]: I0129 18:03:19.460688 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv85m\" (UniqueName: \"kubernetes.io/projected/fd01fd0d-8339-41ba-be01-6c3b723b2ec9-kube-api-access-lv85m\") pod \"must-gather-jss9f\" (UID: \"fd01fd0d-8339-41ba-be01-6c3b723b2ec9\") " pod="openshift-must-gather-lsq2b/must-gather-jss9f" Jan 29 18:03:19 crc kubenswrapper[4886]: I0129 18:03:19.460760 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fd01fd0d-8339-41ba-be01-6c3b723b2ec9-must-gather-output\") pod \"must-gather-jss9f\" (UID: \"fd01fd0d-8339-41ba-be01-6c3b723b2ec9\") " pod="openshift-must-gather-lsq2b/must-gather-jss9f" Jan 29 18:03:19 crc kubenswrapper[4886]: I0129 18:03:19.461230 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/fd01fd0d-8339-41ba-be01-6c3b723b2ec9-must-gather-output\") pod \"must-gather-jss9f\" (UID: \"fd01fd0d-8339-41ba-be01-6c3b723b2ec9\") " pod="openshift-must-gather-lsq2b/must-gather-jss9f" Jan 29 18:03:19 crc kubenswrapper[4886]: I0129 18:03:19.480165 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv85m\" (UniqueName: \"kubernetes.io/projected/fd01fd0d-8339-41ba-be01-6c3b723b2ec9-kube-api-access-lv85m\") pod \"must-gather-jss9f\" (UID: \"fd01fd0d-8339-41ba-be01-6c3b723b2ec9\") " pod="openshift-must-gather-lsq2b/must-gather-jss9f" Jan 29 18:03:19 crc kubenswrapper[4886]: I0129 18:03:19.624727 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lsq2b/must-gather-jss9f" Jan 29 18:03:20 crc kubenswrapper[4886]: I0129 18:03:20.295716 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lsq2b/must-gather-jss9f"] Jan 29 18:03:21 crc kubenswrapper[4886]: I0129 18:03:21.251228 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lsq2b/must-gather-jss9f" event={"ID":"fd01fd0d-8339-41ba-be01-6c3b723b2ec9","Type":"ContainerStarted","Data":"9ee0c2c0be8a2f9c8d72706f166b6ec33e3d7ddd1d43f8c478fdf3404b486eb6"} Jan 29 18:03:27 crc kubenswrapper[4886]: I0129 18:03:27.625011 4886 scope.go:117] "RemoveContainer" containerID="d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a" Jan 29 18:03:27 crc kubenswrapper[4886]: E0129 18:03:27.625916 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:03:29 crc kubenswrapper[4886]: I0129 18:03:29.338770 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lsq2b/must-gather-jss9f" event={"ID":"fd01fd0d-8339-41ba-be01-6c3b723b2ec9","Type":"ContainerStarted","Data":"941c9f11cb71ba19e856bc997a9757714af5c5ee6eb22fb06be9c6d2f5939480"} Jan 29 18:03:29 crc kubenswrapper[4886]: I0129 18:03:29.339440 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lsq2b/must-gather-jss9f" event={"ID":"fd01fd0d-8339-41ba-be01-6c3b723b2ec9","Type":"ContainerStarted","Data":"2738216c87f4889a48f2223f13ba05e092ed8aee10ab356bb6e1bc6a50ac2a71"} Jan 29 18:03:29 crc kubenswrapper[4886]: I0129 18:03:29.371786 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lsq2b/must-gather-jss9f" podStartSLOduration=2.255223162 podStartE2EDuration="10.371766909s" podCreationTimestamp="2026-01-29 18:03:19 +0000 UTC" firstStartedPulling="2026-01-29 18:03:20.298512454 +0000 UTC m=+6083.207231726" lastFinishedPulling="2026-01-29 18:03:28.415056191 +0000 UTC m=+6091.323775473" observedRunningTime="2026-01-29 18:03:29.359484461 +0000 UTC m=+6092.268203733" watchObservedRunningTime="2026-01-29 18:03:29.371766909 +0000 UTC m=+6092.280486181" Jan 29 18:03:33 crc kubenswrapper[4886]: I0129 18:03:33.900909 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lsq2b/crc-debug-lpc7l"] Jan 29 18:03:33 crc kubenswrapper[4886]: I0129 18:03:33.903528 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lsq2b/crc-debug-lpc7l" Jan 29 18:03:34 crc kubenswrapper[4886]: I0129 18:03:34.018179 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcswg\" (UniqueName: \"kubernetes.io/projected/69c46e61-34d0-44e6-89e0-2f9d618c543a-kube-api-access-xcswg\") pod \"crc-debug-lpc7l\" (UID: \"69c46e61-34d0-44e6-89e0-2f9d618c543a\") " pod="openshift-must-gather-lsq2b/crc-debug-lpc7l" Jan 29 18:03:34 crc kubenswrapper[4886]: I0129 18:03:34.018447 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/69c46e61-34d0-44e6-89e0-2f9d618c543a-host\") pod \"crc-debug-lpc7l\" (UID: \"69c46e61-34d0-44e6-89e0-2f9d618c543a\") " pod="openshift-must-gather-lsq2b/crc-debug-lpc7l" Jan 29 18:03:34 crc kubenswrapper[4886]: I0129 18:03:34.120715 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/69c46e61-34d0-44e6-89e0-2f9d618c543a-host\") pod \"crc-debug-lpc7l\" (UID: \"69c46e61-34d0-44e6-89e0-2f9d618c543a\") " pod="openshift-must-gather-lsq2b/crc-debug-lpc7l" Jan 29 18:03:34 crc kubenswrapper[4886]: I0129 18:03:34.120840 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/69c46e61-34d0-44e6-89e0-2f9d618c543a-host\") pod \"crc-debug-lpc7l\" (UID: \"69c46e61-34d0-44e6-89e0-2f9d618c543a\") " pod="openshift-must-gather-lsq2b/crc-debug-lpc7l" Jan 29 18:03:34 crc kubenswrapper[4886]: I0129 18:03:34.121122 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcswg\" (UniqueName: \"kubernetes.io/projected/69c46e61-34d0-44e6-89e0-2f9d618c543a-kube-api-access-xcswg\") pod \"crc-debug-lpc7l\" (UID: \"69c46e61-34d0-44e6-89e0-2f9d618c543a\") " pod="openshift-must-gather-lsq2b/crc-debug-lpc7l" Jan 29 18:03:34 crc kubenswrapper[4886]: I0129 18:03:34.156159 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcswg\" (UniqueName: \"kubernetes.io/projected/69c46e61-34d0-44e6-89e0-2f9d618c543a-kube-api-access-xcswg\") pod \"crc-debug-lpc7l\" (UID: \"69c46e61-34d0-44e6-89e0-2f9d618c543a\") " pod="openshift-must-gather-lsq2b/crc-debug-lpc7l" Jan 29 18:03:34 crc kubenswrapper[4886]: I0129 18:03:34.227695 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lsq2b/crc-debug-lpc7l" Jan 29 18:03:34 crc kubenswrapper[4886]: I0129 18:03:34.400982 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lsq2b/crc-debug-lpc7l" event={"ID":"69c46e61-34d0-44e6-89e0-2f9d618c543a","Type":"ContainerStarted","Data":"3097596f1f56f04205d69d1e7a2a030494676385b6096f7976a95369dd790bf0"} Jan 29 18:03:40 crc kubenswrapper[4886]: I0129 18:03:40.617231 4886 scope.go:117] "RemoveContainer" containerID="d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a" Jan 29 18:03:40 crc kubenswrapper[4886]: E0129 18:03:40.617879 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:03:47 crc kubenswrapper[4886]: I0129 18:03:47.521270 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lsq2b/crc-debug-lpc7l" event={"ID":"69c46e61-34d0-44e6-89e0-2f9d618c543a","Type":"ContainerStarted","Data":"9151f75a515b793b76d61e304966261ea994214c86da5ff66a0d5a788f6197a1"} Jan 29 18:03:47 crc kubenswrapper[4886]: I0129 18:03:47.542771 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lsq2b/crc-debug-lpc7l" podStartSLOduration=1.938434142 podStartE2EDuration="14.542755533s" podCreationTimestamp="2026-01-29 18:03:33 +0000 UTC" firstStartedPulling="2026-01-29 18:03:34.295552039 +0000 UTC m=+6097.204271311" lastFinishedPulling="2026-01-29 18:03:46.89987342 +0000 UTC m=+6109.808592702" observedRunningTime="2026-01-29 18:03:47.538053379 +0000 UTC m=+6110.446772651" watchObservedRunningTime="2026-01-29 18:03:47.542755533 +0000 UTC m=+6110.451474805" Jan 29 18:03:51 crc kubenswrapper[4886]: I0129 18:03:51.615064 4886 scope.go:117] "RemoveContainer" containerID="d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a" Jan 29 18:03:51 crc kubenswrapper[4886]: E0129 18:03:51.616017 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:04:04 crc kubenswrapper[4886]: I0129 18:04:04.619008 4886 scope.go:117] "RemoveContainer" containerID="d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a" Jan 29 18:04:04 crc kubenswrapper[4886]: E0129 18:04:04.619659 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:04:09 crc kubenswrapper[4886]: I0129 18:04:09.753940 4886 generic.go:334] "Generic (PLEG): container finished" podID="69c46e61-34d0-44e6-89e0-2f9d618c543a" 
containerID="9151f75a515b793b76d61e304966261ea994214c86da5ff66a0d5a788f6197a1" exitCode=0 Jan 29 18:04:09 crc kubenswrapper[4886]: I0129 18:04:09.754009 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lsq2b/crc-debug-lpc7l" event={"ID":"69c46e61-34d0-44e6-89e0-2f9d618c543a","Type":"ContainerDied","Data":"9151f75a515b793b76d61e304966261ea994214c86da5ff66a0d5a788f6197a1"} Jan 29 18:04:10 crc kubenswrapper[4886]: I0129 18:04:10.922133 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lsq2b/crc-debug-lpc7l" Jan 29 18:04:10 crc kubenswrapper[4886]: I0129 18:04:10.949984 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lsq2b/crc-debug-lpc7l"] Jan 29 18:04:10 crc kubenswrapper[4886]: I0129 18:04:10.959804 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lsq2b/crc-debug-lpc7l"] Jan 29 18:04:11 crc kubenswrapper[4886]: I0129 18:04:11.014956 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/69c46e61-34d0-44e6-89e0-2f9d618c543a-host\") pod \"69c46e61-34d0-44e6-89e0-2f9d618c543a\" (UID: \"69c46e61-34d0-44e6-89e0-2f9d618c543a\") " Jan 29 18:04:11 crc kubenswrapper[4886]: I0129 18:04:11.015079 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69c46e61-34d0-44e6-89e0-2f9d618c543a-host" (OuterVolumeSpecName: "host") pod "69c46e61-34d0-44e6-89e0-2f9d618c543a" (UID: "69c46e61-34d0-44e6-89e0-2f9d618c543a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 18:04:11 crc kubenswrapper[4886]: I0129 18:04:11.015106 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcswg\" (UniqueName: \"kubernetes.io/projected/69c46e61-34d0-44e6-89e0-2f9d618c543a-kube-api-access-xcswg\") pod \"69c46e61-34d0-44e6-89e0-2f9d618c543a\" (UID: \"69c46e61-34d0-44e6-89e0-2f9d618c543a\") " Jan 29 18:04:11 crc kubenswrapper[4886]: I0129 18:04:11.015632 4886 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/69c46e61-34d0-44e6-89e0-2f9d618c543a-host\") on node \"crc\" DevicePath \"\"" Jan 29 18:04:11 crc kubenswrapper[4886]: I0129 18:04:11.020529 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69c46e61-34d0-44e6-89e0-2f9d618c543a-kube-api-access-xcswg" (OuterVolumeSpecName: "kube-api-access-xcswg") pod "69c46e61-34d0-44e6-89e0-2f9d618c543a" (UID: "69c46e61-34d0-44e6-89e0-2f9d618c543a"). InnerVolumeSpecName "kube-api-access-xcswg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 18:04:11 crc kubenswrapper[4886]: I0129 18:04:11.117445 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcswg\" (UniqueName: \"kubernetes.io/projected/69c46e61-34d0-44e6-89e0-2f9d618c543a-kube-api-access-xcswg\") on node \"crc\" DevicePath \"\"" Jan 29 18:04:11 crc kubenswrapper[4886]: I0129 18:04:11.789546 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3097596f1f56f04205d69d1e7a2a030494676385b6096f7976a95369dd790bf0" Jan 29 18:04:11 crc kubenswrapper[4886]: I0129 18:04:11.789675 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lsq2b/crc-debug-lpc7l" Jan 29 18:04:12 crc kubenswrapper[4886]: I0129 18:04:12.164294 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lsq2b/crc-debug-f6g6p"] Jan 29 18:04:12 crc kubenswrapper[4886]: E0129 18:04:12.165151 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69c46e61-34d0-44e6-89e0-2f9d618c543a" containerName="container-00" Jan 29 18:04:12 crc kubenswrapper[4886]: I0129 18:04:12.165180 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="69c46e61-34d0-44e6-89e0-2f9d618c543a" containerName="container-00" Jan 29 18:04:12 crc kubenswrapper[4886]: I0129 18:04:12.165515 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="69c46e61-34d0-44e6-89e0-2f9d618c543a" containerName="container-00" Jan 29 18:04:12 crc kubenswrapper[4886]: I0129 18:04:12.166731 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lsq2b/crc-debug-f6g6p" Jan 29 18:04:12 crc kubenswrapper[4886]: I0129 18:04:12.256038 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf-host\") pod \"crc-debug-f6g6p\" (UID: \"ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf\") " pod="openshift-must-gather-lsq2b/crc-debug-f6g6p" Jan 29 18:04:12 crc kubenswrapper[4886]: I0129 18:04:12.256149 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxthf\" (UniqueName: \"kubernetes.io/projected/ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf-kube-api-access-qxthf\") pod \"crc-debug-f6g6p\" (UID: \"ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf\") " pod="openshift-must-gather-lsq2b/crc-debug-f6g6p" Jan 29 18:04:12 crc kubenswrapper[4886]: I0129 18:04:12.359046 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxthf\" (UniqueName: \"kubernetes.io/projected/ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf-kube-api-access-qxthf\") pod \"crc-debug-f6g6p\" (UID: \"ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf\") " pod="openshift-must-gather-lsq2b/crc-debug-f6g6p" Jan 29 18:04:12 crc kubenswrapper[4886]: I0129 18:04:12.365445 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf-host\") pod \"crc-debug-f6g6p\" (UID: \"ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf\") " pod="openshift-must-gather-lsq2b/crc-debug-f6g6p" Jan 29 18:04:12 crc kubenswrapper[4886]: I0129 18:04:12.365578 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf-host\") pod \"crc-debug-f6g6p\" (UID: \"ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf\") " pod="openshift-must-gather-lsq2b/crc-debug-f6g6p" Jan 29 18:04:12 crc kubenswrapper[4886]: I0129 18:04:12.391417 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxthf\" (UniqueName: \"kubernetes.io/projected/ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf-kube-api-access-qxthf\") pod \"crc-debug-f6g6p\" (UID: \"ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf\") " pod="openshift-must-gather-lsq2b/crc-debug-f6g6p" Jan 29 18:04:12 crc kubenswrapper[4886]: I0129 18:04:12.494469 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lsq2b/crc-debug-f6g6p" Jan 29 18:04:12 crc kubenswrapper[4886]: I0129 18:04:12.644198 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69c46e61-34d0-44e6-89e0-2f9d618c543a" path="/var/lib/kubelet/pods/69c46e61-34d0-44e6-89e0-2f9d618c543a/volumes" Jan 29 18:04:12 crc kubenswrapper[4886]: I0129 18:04:12.799802 4886 generic.go:334] "Generic (PLEG): container finished" podID="ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf" containerID="4cbfd678d0e8c9a0c43080d33d221a63872ea34632a51cb0a6c22a5407b09f79" exitCode=1 Jan 29 18:04:12 crc kubenswrapper[4886]: I0129 18:04:12.799905 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lsq2b/crc-debug-f6g6p" event={"ID":"ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf","Type":"ContainerDied","Data":"4cbfd678d0e8c9a0c43080d33d221a63872ea34632a51cb0a6c22a5407b09f79"} Jan 29 18:04:12 crc kubenswrapper[4886]: I0129 18:04:12.800352 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lsq2b/crc-debug-f6g6p" event={"ID":"ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf","Type":"ContainerStarted","Data":"7ac46cc770b9f49f63ae79a1a7b6a62e74f610aa8420a3a20c45964faaf5ceab"} Jan 29 18:04:12 crc kubenswrapper[4886]: I0129 18:04:12.844867 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lsq2b/crc-debug-f6g6p"] Jan 29 18:04:12 crc kubenswrapper[4886]: I0129 18:04:12.853458 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lsq2b/crc-debug-f6g6p"] Jan 29 18:04:13 crc kubenswrapper[4886]: I0129 18:04:13.923909 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lsq2b/crc-debug-f6g6p" Jan 29 18:04:14 crc kubenswrapper[4886]: I0129 18:04:14.004278 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxthf\" (UniqueName: \"kubernetes.io/projected/ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf-kube-api-access-qxthf\") pod \"ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf\" (UID: \"ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf\") " Jan 29 18:04:14 crc kubenswrapper[4886]: I0129 18:04:14.004562 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf-host\") pod \"ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf\" (UID: \"ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf\") " Jan 29 18:04:14 crc kubenswrapper[4886]: I0129 18:04:14.004924 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf-host" (OuterVolumeSpecName: "host") pod "ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf" (UID: "ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 18:04:14 crc kubenswrapper[4886]: I0129 18:04:14.005407 4886 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf-host\") on node \"crc\" DevicePath \"\"" Jan 29 18:04:14 crc kubenswrapper[4886]: I0129 18:04:14.017043 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf-kube-api-access-qxthf" (OuterVolumeSpecName: "kube-api-access-qxthf") pod "ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf" (UID: "ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf"). InnerVolumeSpecName "kube-api-access-qxthf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 18:04:14 crc kubenswrapper[4886]: I0129 18:04:14.109106 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxthf\" (UniqueName: \"kubernetes.io/projected/ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf-kube-api-access-qxthf\") on node \"crc\" DevicePath \"\"" Jan 29 18:04:14 crc kubenswrapper[4886]: I0129 18:04:14.631665 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf" path="/var/lib/kubelet/pods/ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf/volumes" Jan 29 18:04:14 crc kubenswrapper[4886]: I0129 18:04:14.820493 4886 scope.go:117] "RemoveContainer" containerID="4cbfd678d0e8c9a0c43080d33d221a63872ea34632a51cb0a6c22a5407b09f79" Jan 29 18:04:14 crc kubenswrapper[4886]: I0129 18:04:14.820555 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lsq2b/crc-debug-f6g6p" Jan 29 18:04:15 crc kubenswrapper[4886]: I0129 18:04:15.568300 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kmvr5"] Jan 29 18:04:15 crc kubenswrapper[4886]: E0129 18:04:15.569311 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf" containerName="container-00" Jan 29 18:04:15 crc kubenswrapper[4886]: I0129 18:04:15.569406 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf" containerName="container-00" Jan 29 18:04:15 crc kubenswrapper[4886]: I0129 18:04:15.569903 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff59c3a7-8e4e-4c1d-a0f0-ffd6fc31ddbf" containerName="container-00" Jan 29 18:04:15 crc kubenswrapper[4886]: I0129 18:04:15.583574 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kmvr5" Jan 29 18:04:15 crc kubenswrapper[4886]: I0129 18:04:15.586235 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kmvr5"] Jan 29 18:04:15 crc kubenswrapper[4886]: I0129 18:04:15.686200 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf73c735-d3aa-476b-9390-6a150d51a290-utilities\") pod \"redhat-marketplace-kmvr5\" (UID: \"cf73c735-d3aa-476b-9390-6a150d51a290\") " pod="openshift-marketplace/redhat-marketplace-kmvr5" Jan 29 18:04:15 crc kubenswrapper[4886]: I0129 18:04:15.686295 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf73c735-d3aa-476b-9390-6a150d51a290-catalog-content\") pod \"redhat-marketplace-kmvr5\" (UID: \"cf73c735-d3aa-476b-9390-6a150d51a290\") " pod="openshift-marketplace/redhat-marketplace-kmvr5" Jan 29 18:04:15 crc kubenswrapper[4886]: I0129 18:04:15.686417 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfjc5\" (UniqueName: \"kubernetes.io/projected/cf73c735-d3aa-476b-9390-6a150d51a290-kube-api-access-lfjc5\") pod \"redhat-marketplace-kmvr5\" (UID: \"cf73c735-d3aa-476b-9390-6a150d51a290\") " pod="openshift-marketplace/redhat-marketplace-kmvr5" Jan 29 18:04:15 crc kubenswrapper[4886]: I0129 18:04:15.790177 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfjc5\" (UniqueName: \"kubernetes.io/projected/cf73c735-d3aa-476b-9390-6a150d51a290-kube-api-access-lfjc5\") pod \"redhat-marketplace-kmvr5\" (UID: \"cf73c735-d3aa-476b-9390-6a150d51a290\") " pod="openshift-marketplace/redhat-marketplace-kmvr5" Jan 29 18:04:15 crc kubenswrapper[4886]: I0129 18:04:15.790372 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf73c735-d3aa-476b-9390-6a150d51a290-utilities\") pod \"redhat-marketplace-kmvr5\" (UID: \"cf73c735-d3aa-476b-9390-6a150d51a290\") " pod="openshift-marketplace/redhat-marketplace-kmvr5" Jan 29 18:04:15 crc kubenswrapper[4886]: I0129 18:04:15.790432 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf73c735-d3aa-476b-9390-6a150d51a290-catalog-content\") pod \"redhat-marketplace-kmvr5\" (UID: \"cf73c735-d3aa-476b-9390-6a150d51a290\") " pod="openshift-marketplace/redhat-marketplace-kmvr5" Jan 29 18:04:15 crc kubenswrapper[4886]: I0129 18:04:15.790982 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf73c735-d3aa-476b-9390-6a150d51a290-utilities\") pod \"redhat-marketplace-kmvr5\" (UID: \"cf73c735-d3aa-476b-9390-6a150d51a290\") " pod="openshift-marketplace/redhat-marketplace-kmvr5" Jan 29 18:04:15 crc kubenswrapper[4886]: I0129 18:04:15.791032 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf73c735-d3aa-476b-9390-6a150d51a290-catalog-content\") pod \"redhat-marketplace-kmvr5\" (UID: \"cf73c735-d3aa-476b-9390-6a150d51a290\") " pod="openshift-marketplace/redhat-marketplace-kmvr5" Jan 29 18:04:15 crc kubenswrapper[4886]: I0129 18:04:15.817265 4886 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-lfjc5\" (UniqueName: \"kubernetes.io/projected/cf73c735-d3aa-476b-9390-6a150d51a290-kube-api-access-lfjc5\") pod \"redhat-marketplace-kmvr5\" (UID: \"cf73c735-d3aa-476b-9390-6a150d51a290\") " pod="openshift-marketplace/redhat-marketplace-kmvr5" Jan 29 18:04:15 crc kubenswrapper[4886]: I0129 18:04:15.909037 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kmvr5" Jan 29 18:04:16 crc kubenswrapper[4886]: I0129 18:04:16.419349 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kmvr5"] Jan 29 18:04:16 crc kubenswrapper[4886]: W0129 18:04:16.421580 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf73c735_d3aa_476b_9390_6a150d51a290.slice/crio-3ea04023ad6f2098f354054573352189e64af7b720c4d23b8d794816a83966a1 WatchSource:0}: Error finding container 3ea04023ad6f2098f354054573352189e64af7b720c4d23b8d794816a83966a1: Status 404 returned error can't find the container with id 3ea04023ad6f2098f354054573352189e64af7b720c4d23b8d794816a83966a1 Jan 29 18:04:16 crc kubenswrapper[4886]: I0129 18:04:16.847066 4886 generic.go:334] "Generic (PLEG): container finished" podID="cf73c735-d3aa-476b-9390-6a150d51a290" containerID="54c179145b068653a1e221165954ed6dc1e5732be8151bfe1ac6f1f61a83422f" exitCode=0 Jan 29 18:04:16 crc kubenswrapper[4886]: I0129 18:04:16.847137 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmvr5" event={"ID":"cf73c735-d3aa-476b-9390-6a150d51a290","Type":"ContainerDied","Data":"54c179145b068653a1e221165954ed6dc1e5732be8151bfe1ac6f1f61a83422f"} Jan 29 18:04:16 crc kubenswrapper[4886]: I0129 18:04:16.847310 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmvr5" event={"ID":"cf73c735-d3aa-476b-9390-6a150d51a290","Type":"ContainerStarted","Data":"3ea04023ad6f2098f354054573352189e64af7b720c4d23b8d794816a83966a1"} Jan 29 18:04:18 crc kubenswrapper[4886]: I0129 18:04:18.864849 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmvr5" event={"ID":"cf73c735-d3aa-476b-9390-6a150d51a290","Type":"ContainerStarted","Data":"dff1bde7e6d514472b2010c2fd3b5381b5a397e39be70a375089aa152b0fac0f"} Jan 29 18:04:19 crc kubenswrapper[4886]: I0129 18:04:19.616111 4886 scope.go:117] "RemoveContainer" containerID="d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a" Jan 29 18:04:19 crc kubenswrapper[4886]: E0129 18:04:19.616760 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:04:19 crc kubenswrapper[4886]: I0129 18:04:19.876083 4886 generic.go:334] "Generic (PLEG): container finished" podID="cf73c735-d3aa-476b-9390-6a150d51a290" containerID="dff1bde7e6d514472b2010c2fd3b5381b5a397e39be70a375089aa152b0fac0f" exitCode=0 Jan 29 18:04:19 crc kubenswrapper[4886]: I0129 18:04:19.876124 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmvr5" 
event={"ID":"cf73c735-d3aa-476b-9390-6a150d51a290","Type":"ContainerDied","Data":"dff1bde7e6d514472b2010c2fd3b5381b5a397e39be70a375089aa152b0fac0f"} Jan 29 18:04:20 crc kubenswrapper[4886]: I0129 18:04:20.890073 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmvr5" event={"ID":"cf73c735-d3aa-476b-9390-6a150d51a290","Type":"ContainerStarted","Data":"27750b35201061f1ffd4a205c3e0c5eef07cfdb632a99934639047305555bc63"} Jan 29 18:04:20 crc kubenswrapper[4886]: I0129 18:04:20.917017 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kmvr5" podStartSLOduration=2.483100357 podStartE2EDuration="5.916998993s" podCreationTimestamp="2026-01-29 18:04:15 +0000 UTC" firstStartedPulling="2026-01-29 18:04:16.849516628 +0000 UTC m=+6139.758235910" lastFinishedPulling="2026-01-29 18:04:20.283415254 +0000 UTC m=+6143.192134546" observedRunningTime="2026-01-29 18:04:20.906233058 +0000 UTC m=+6143.814952340" watchObservedRunningTime="2026-01-29 18:04:20.916998993 +0000 UTC m=+6143.825718275" Jan 29 18:04:20 crc kubenswrapper[4886]: I0129 18:04:20.969825 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lj627"] Jan 29 18:04:20 crc kubenswrapper[4886]: I0129 18:04:20.972741 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lj627" Jan 29 18:04:20 crc kubenswrapper[4886]: I0129 18:04:20.997029 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lj627"] Jan 29 18:04:21 crc kubenswrapper[4886]: I0129 18:04:21.135502 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace6b3f5-2f50-4320-87db-40229f5f2cfa-utilities\") pod \"redhat-operators-lj627\" (UID: \"ace6b3f5-2f50-4320-87db-40229f5f2cfa\") " pod="openshift-marketplace/redhat-operators-lj627" Jan 29 18:04:21 crc kubenswrapper[4886]: I0129 18:04:21.135691 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace6b3f5-2f50-4320-87db-40229f5f2cfa-catalog-content\") pod \"redhat-operators-lj627\" (UID: \"ace6b3f5-2f50-4320-87db-40229f5f2cfa\") " pod="openshift-marketplace/redhat-operators-lj627" Jan 29 18:04:21 crc kubenswrapper[4886]: I0129 18:04:21.135928 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn8wx\" (UniqueName: \"kubernetes.io/projected/ace6b3f5-2f50-4320-87db-40229f5f2cfa-kube-api-access-bn8wx\") pod \"redhat-operators-lj627\" (UID: \"ace6b3f5-2f50-4320-87db-40229f5f2cfa\") " pod="openshift-marketplace/redhat-operators-lj627" Jan 29 18:04:21 crc kubenswrapper[4886]: I0129 18:04:21.238480 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn8wx\" (UniqueName: \"kubernetes.io/projected/ace6b3f5-2f50-4320-87db-40229f5f2cfa-kube-api-access-bn8wx\") pod \"redhat-operators-lj627\" (UID: \"ace6b3f5-2f50-4320-87db-40229f5f2cfa\") " pod="openshift-marketplace/redhat-operators-lj627" Jan 29 18:04:21 crc kubenswrapper[4886]: I0129 18:04:21.238627 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace6b3f5-2f50-4320-87db-40229f5f2cfa-utilities\") pod \"redhat-operators-lj627\" (UID: 
\"ace6b3f5-2f50-4320-87db-40229f5f2cfa\") " pod="openshift-marketplace/redhat-operators-lj627" Jan 29 18:04:21 crc kubenswrapper[4886]: I0129 18:04:21.238698 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace6b3f5-2f50-4320-87db-40229f5f2cfa-catalog-content\") pod \"redhat-operators-lj627\" (UID: \"ace6b3f5-2f50-4320-87db-40229f5f2cfa\") " pod="openshift-marketplace/redhat-operators-lj627" Jan 29 18:04:21 crc kubenswrapper[4886]: I0129 18:04:21.239222 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace6b3f5-2f50-4320-87db-40229f5f2cfa-catalog-content\") pod \"redhat-operators-lj627\" (UID: \"ace6b3f5-2f50-4320-87db-40229f5f2cfa\") " pod="openshift-marketplace/redhat-operators-lj627" Jan 29 18:04:21 crc kubenswrapper[4886]: I0129 18:04:21.239374 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace6b3f5-2f50-4320-87db-40229f5f2cfa-utilities\") pod \"redhat-operators-lj627\" (UID: \"ace6b3f5-2f50-4320-87db-40229f5f2cfa\") " pod="openshift-marketplace/redhat-operators-lj627" Jan 29 18:04:21 crc kubenswrapper[4886]: I0129 18:04:21.256465 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn8wx\" (UniqueName: \"kubernetes.io/projected/ace6b3f5-2f50-4320-87db-40229f5f2cfa-kube-api-access-bn8wx\") pod \"redhat-operators-lj627\" (UID: \"ace6b3f5-2f50-4320-87db-40229f5f2cfa\") " pod="openshift-marketplace/redhat-operators-lj627" Jan 29 18:04:21 crc kubenswrapper[4886]: I0129 18:04:21.307283 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lj627" Jan 29 18:04:21 crc kubenswrapper[4886]: I0129 18:04:21.830082 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lj627"] Jan 29 18:04:21 crc kubenswrapper[4886]: I0129 18:04:21.905469 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lj627" event={"ID":"ace6b3f5-2f50-4320-87db-40229f5f2cfa","Type":"ContainerStarted","Data":"933561cc3f3d4b68e66a04703782c2021621ec267367f5610272c1e684a67323"} Jan 29 18:04:22 crc kubenswrapper[4886]: I0129 18:04:22.916275 4886 generic.go:334] "Generic (PLEG): container finished" podID="ace6b3f5-2f50-4320-87db-40229f5f2cfa" containerID="468f6a38bd34b0f68ce35ac9861dbb58e082aa0417a0fb5de5b0cab0abc3db06" exitCode=0 Jan 29 18:04:22 crc kubenswrapper[4886]: I0129 18:04:22.916384 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lj627" event={"ID":"ace6b3f5-2f50-4320-87db-40229f5f2cfa","Type":"ContainerDied","Data":"468f6a38bd34b0f68ce35ac9861dbb58e082aa0417a0fb5de5b0cab0abc3db06"} Jan 29 18:04:23 crc kubenswrapper[4886]: I0129 18:04:23.927062 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lj627" event={"ID":"ace6b3f5-2f50-4320-87db-40229f5f2cfa","Type":"ContainerStarted","Data":"7d317f44136dcc76fb7783151dfa87ebccf117dd0b425dcb78ac3d5980079592"} Jan 29 18:04:25 crc kubenswrapper[4886]: I0129 18:04:25.909471 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kmvr5" Jan 29 18:04:25 crc kubenswrapper[4886]: I0129 18:04:25.910178 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-kmvr5" Jan 29 18:04:25 crc kubenswrapper[4886]: I0129 18:04:25.979072 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kmvr5" Jan 29 18:04:26 crc kubenswrapper[4886]: I0129 18:04:26.032062 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kmvr5" Jan 29 18:04:29 crc kubenswrapper[4886]: I0129 18:04:29.987276 4886 generic.go:334] "Generic (PLEG): container finished" podID="ace6b3f5-2f50-4320-87db-40229f5f2cfa" containerID="7d317f44136dcc76fb7783151dfa87ebccf117dd0b425dcb78ac3d5980079592" exitCode=0 Jan 29 18:04:29 crc kubenswrapper[4886]: I0129 18:04:29.987363 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lj627" event={"ID":"ace6b3f5-2f50-4320-87db-40229f5f2cfa","Type":"ContainerDied","Data":"7d317f44136dcc76fb7783151dfa87ebccf117dd0b425dcb78ac3d5980079592"} Jan 29 18:04:31 crc kubenswrapper[4886]: I0129 18:04:31.615764 4886 scope.go:117] "RemoveContainer" containerID="d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a" Jan 29 18:04:31 crc kubenswrapper[4886]: E0129 18:04:31.616408 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:04:32 crc kubenswrapper[4886]: I0129 18:04:32.010937 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lj627" event={"ID":"ace6b3f5-2f50-4320-87db-40229f5f2cfa","Type":"ContainerStarted","Data":"11705f34993c6638ba8642b38964a86a5de557e9f2d5c74da2dc5a7240803418"} Jan 29 18:04:32 crc kubenswrapper[4886]: I0129 18:04:32.034742 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lj627" podStartSLOduration=3.433709689 podStartE2EDuration="12.034711853s" podCreationTimestamp="2026-01-29 18:04:20 +0000 UTC" firstStartedPulling="2026-01-29 18:04:22.919689695 +0000 UTC m=+6145.828408977" lastFinishedPulling="2026-01-29 18:04:31.520691859 +0000 UTC m=+6154.429411141" observedRunningTime="2026-01-29 18:04:32.030440522 +0000 UTC m=+6154.939159794" watchObservedRunningTime="2026-01-29 18:04:32.034711853 +0000 UTC m=+6154.943431155" Jan 29 18:04:33 crc kubenswrapper[4886]: I0129 18:04:33.573203 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kmvr5"] Jan 29 18:04:33 crc kubenswrapper[4886]: I0129 18:04:33.573937 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kmvr5" podUID="cf73c735-d3aa-476b-9390-6a150d51a290" containerName="registry-server" containerID="cri-o://27750b35201061f1ffd4a205c3e0c5eef07cfdb632a99934639047305555bc63" gracePeriod=2 Jan 29 18:04:34 crc kubenswrapper[4886]: I0129 18:04:34.045350 4886 generic.go:334] "Generic (PLEG): container finished" podID="cf73c735-d3aa-476b-9390-6a150d51a290" containerID="27750b35201061f1ffd4a205c3e0c5eef07cfdb632a99934639047305555bc63" exitCode=0 Jan 29 18:04:34 crc kubenswrapper[4886]: I0129 18:04:34.045438 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-kmvr5" event={"ID":"cf73c735-d3aa-476b-9390-6a150d51a290","Type":"ContainerDied","Data":"27750b35201061f1ffd4a205c3e0c5eef07cfdb632a99934639047305555bc63"} Jan 29 18:04:34 crc kubenswrapper[4886]: I0129 18:04:34.223595 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kmvr5" Jan 29 18:04:34 crc kubenswrapper[4886]: I0129 18:04:34.296821 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfjc5\" (UniqueName: \"kubernetes.io/projected/cf73c735-d3aa-476b-9390-6a150d51a290-kube-api-access-lfjc5\") pod \"cf73c735-d3aa-476b-9390-6a150d51a290\" (UID: \"cf73c735-d3aa-476b-9390-6a150d51a290\") " Jan 29 18:04:34 crc kubenswrapper[4886]: I0129 18:04:34.296918 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf73c735-d3aa-476b-9390-6a150d51a290-utilities\") pod \"cf73c735-d3aa-476b-9390-6a150d51a290\" (UID: \"cf73c735-d3aa-476b-9390-6a150d51a290\") " Jan 29 18:04:34 crc kubenswrapper[4886]: I0129 18:04:34.296968 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf73c735-d3aa-476b-9390-6a150d51a290-catalog-content\") pod \"cf73c735-d3aa-476b-9390-6a150d51a290\" (UID: \"cf73c735-d3aa-476b-9390-6a150d51a290\") " Jan 29 18:04:34 crc kubenswrapper[4886]: I0129 18:04:34.298206 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf73c735-d3aa-476b-9390-6a150d51a290-utilities" (OuterVolumeSpecName: "utilities") pod "cf73c735-d3aa-476b-9390-6a150d51a290" (UID: "cf73c735-d3aa-476b-9390-6a150d51a290"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 18:04:34 crc kubenswrapper[4886]: I0129 18:04:34.299131 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf73c735-d3aa-476b-9390-6a150d51a290-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 18:04:34 crc kubenswrapper[4886]: I0129 18:04:34.305834 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf73c735-d3aa-476b-9390-6a150d51a290-kube-api-access-lfjc5" (OuterVolumeSpecName: "kube-api-access-lfjc5") pod "cf73c735-d3aa-476b-9390-6a150d51a290" (UID: "cf73c735-d3aa-476b-9390-6a150d51a290"). InnerVolumeSpecName "kube-api-access-lfjc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 18:04:34 crc kubenswrapper[4886]: I0129 18:04:34.324674 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf73c735-d3aa-476b-9390-6a150d51a290-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf73c735-d3aa-476b-9390-6a150d51a290" (UID: "cf73c735-d3aa-476b-9390-6a150d51a290"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 18:04:34 crc kubenswrapper[4886]: I0129 18:04:34.402102 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfjc5\" (UniqueName: \"kubernetes.io/projected/cf73c735-d3aa-476b-9390-6a150d51a290-kube-api-access-lfjc5\") on node \"crc\" DevicePath \"\"" Jan 29 18:04:34 crc kubenswrapper[4886]: I0129 18:04:34.402153 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf73c735-d3aa-476b-9390-6a150d51a290-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 18:04:35 crc kubenswrapper[4886]: I0129 18:04:35.056951 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmvr5" event={"ID":"cf73c735-d3aa-476b-9390-6a150d51a290","Type":"ContainerDied","Data":"3ea04023ad6f2098f354054573352189e64af7b720c4d23b8d794816a83966a1"} Jan 29 18:04:35 crc kubenswrapper[4886]: I0129 18:04:35.058208 4886 scope.go:117] "RemoveContainer" containerID="27750b35201061f1ffd4a205c3e0c5eef07cfdb632a99934639047305555bc63" Jan 29 18:04:35 crc kubenswrapper[4886]: I0129 18:04:35.058499 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kmvr5" Jan 29 18:04:35 crc kubenswrapper[4886]: I0129 18:04:35.087941 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kmvr5"] Jan 29 18:04:35 crc kubenswrapper[4886]: I0129 18:04:35.095691 4886 scope.go:117] "RemoveContainer" containerID="dff1bde7e6d514472b2010c2fd3b5381b5a397e39be70a375089aa152b0fac0f" Jan 29 18:04:35 crc kubenswrapper[4886]: I0129 18:04:35.097234 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kmvr5"] Jan 29 18:04:35 crc kubenswrapper[4886]: I0129 18:04:35.118004 4886 scope.go:117] "RemoveContainer" containerID="54c179145b068653a1e221165954ed6dc1e5732be8151bfe1ac6f1f61a83422f" Jan 29 18:04:36 crc kubenswrapper[4886]: I0129 18:04:36.626887 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf73c735-d3aa-476b-9390-6a150d51a290" path="/var/lib/kubelet/pods/cf73c735-d3aa-476b-9390-6a150d51a290/volumes" Jan 29 18:04:41 crc kubenswrapper[4886]: I0129 18:04:41.308271 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lj627" Jan 29 18:04:41 crc kubenswrapper[4886]: I0129 18:04:41.308792 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lj627" Jan 29 18:04:42 crc kubenswrapper[4886]: I0129 18:04:42.362229 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lj627" podUID="ace6b3f5-2f50-4320-87db-40229f5f2cfa" containerName="registry-server" probeResult="failure" output=< Jan 29 18:04:42 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Jan 29 18:04:42 crc kubenswrapper[4886]: > Jan 29 18:04:45 crc kubenswrapper[4886]: I0129 18:04:45.615666 4886 scope.go:117] "RemoveContainer" containerID="d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a" Jan 29 18:04:45 crc kubenswrapper[4886]: E0129 18:04:45.616351 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:04:51 crc kubenswrapper[4886]: I0129 18:04:51.405695 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lj627" Jan 29 18:04:51 crc kubenswrapper[4886]: I0129 18:04:51.462696 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lj627" Jan 29 18:04:52 crc kubenswrapper[4886]: I0129 18:04:52.175359 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lj627"] Jan 29 18:04:53 crc kubenswrapper[4886]: I0129 18:04:53.230173 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lj627" podUID="ace6b3f5-2f50-4320-87db-40229f5f2cfa" containerName="registry-server" containerID="cri-o://11705f34993c6638ba8642b38964a86a5de557e9f2d5c74da2dc5a7240803418" gracePeriod=2 Jan 29 18:04:53 crc kubenswrapper[4886]: I0129 18:04:53.797597 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lj627" Jan 29 18:04:53 crc kubenswrapper[4886]: I0129 18:04:53.866101 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace6b3f5-2f50-4320-87db-40229f5f2cfa-catalog-content\") pod \"ace6b3f5-2f50-4320-87db-40229f5f2cfa\" (UID: \"ace6b3f5-2f50-4320-87db-40229f5f2cfa\") " Jan 29 18:04:53 crc kubenswrapper[4886]: I0129 18:04:53.866216 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn8wx\" (UniqueName: \"kubernetes.io/projected/ace6b3f5-2f50-4320-87db-40229f5f2cfa-kube-api-access-bn8wx\") pod \"ace6b3f5-2f50-4320-87db-40229f5f2cfa\" (UID: \"ace6b3f5-2f50-4320-87db-40229f5f2cfa\") " Jan 29 18:04:53 crc kubenswrapper[4886]: I0129 18:04:53.866291 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace6b3f5-2f50-4320-87db-40229f5f2cfa-utilities\") pod \"ace6b3f5-2f50-4320-87db-40229f5f2cfa\" (UID: \"ace6b3f5-2f50-4320-87db-40229f5f2cfa\") " Jan 29 18:04:53 crc kubenswrapper[4886]: I0129 18:04:53.867183 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ace6b3f5-2f50-4320-87db-40229f5f2cfa-utilities" (OuterVolumeSpecName: "utilities") pod "ace6b3f5-2f50-4320-87db-40229f5f2cfa" (UID: "ace6b3f5-2f50-4320-87db-40229f5f2cfa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 18:04:53 crc kubenswrapper[4886]: I0129 18:04:53.876546 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ace6b3f5-2f50-4320-87db-40229f5f2cfa-kube-api-access-bn8wx" (OuterVolumeSpecName: "kube-api-access-bn8wx") pod "ace6b3f5-2f50-4320-87db-40229f5f2cfa" (UID: "ace6b3f5-2f50-4320-87db-40229f5f2cfa"). InnerVolumeSpecName "kube-api-access-bn8wx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 18:04:53 crc kubenswrapper[4886]: I0129 18:04:53.970008 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace6b3f5-2f50-4320-87db-40229f5f2cfa-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 18:04:53 crc kubenswrapper[4886]: I0129 18:04:53.970075 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bn8wx\" (UniqueName: \"kubernetes.io/projected/ace6b3f5-2f50-4320-87db-40229f5f2cfa-kube-api-access-bn8wx\") on node \"crc\" DevicePath \"\"" Jan 29 18:04:53 crc kubenswrapper[4886]: I0129 18:04:53.990066 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ace6b3f5-2f50-4320-87db-40229f5f2cfa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ace6b3f5-2f50-4320-87db-40229f5f2cfa" (UID: "ace6b3f5-2f50-4320-87db-40229f5f2cfa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 18:04:54 crc kubenswrapper[4886]: I0129 18:04:54.072050 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace6b3f5-2f50-4320-87db-40229f5f2cfa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 18:04:54 crc kubenswrapper[4886]: I0129 18:04:54.244189 4886 generic.go:334] "Generic (PLEG): container finished" podID="ace6b3f5-2f50-4320-87db-40229f5f2cfa" containerID="11705f34993c6638ba8642b38964a86a5de557e9f2d5c74da2dc5a7240803418" exitCode=0 Jan 29 18:04:54 crc kubenswrapper[4886]: I0129 18:04:54.244227 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lj627" event={"ID":"ace6b3f5-2f50-4320-87db-40229f5f2cfa","Type":"ContainerDied","Data":"11705f34993c6638ba8642b38964a86a5de557e9f2d5c74da2dc5a7240803418"} Jan 29 18:04:54 crc kubenswrapper[4886]: I0129 18:04:54.244280 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lj627" event={"ID":"ace6b3f5-2f50-4320-87db-40229f5f2cfa","Type":"ContainerDied","Data":"933561cc3f3d4b68e66a04703782c2021621ec267367f5610272c1e684a67323"} Jan 29 18:04:54 crc kubenswrapper[4886]: I0129 18:04:54.244305 4886 scope.go:117] "RemoveContainer" containerID="11705f34993c6638ba8642b38964a86a5de557e9f2d5c74da2dc5a7240803418" Jan 29 18:04:54 crc kubenswrapper[4886]: I0129 18:04:54.244315 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lj627" Jan 29 18:04:54 crc kubenswrapper[4886]: I0129 18:04:54.293497 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lj627"] Jan 29 18:04:54 crc kubenswrapper[4886]: I0129 18:04:54.293671 4886 scope.go:117] "RemoveContainer" containerID="7d317f44136dcc76fb7783151dfa87ebccf117dd0b425dcb78ac3d5980079592" Jan 29 18:04:54 crc kubenswrapper[4886]: I0129 18:04:54.302236 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lj627"] Jan 29 18:04:54 crc kubenswrapper[4886]: I0129 18:04:54.328047 4886 scope.go:117] "RemoveContainer" containerID="468f6a38bd34b0f68ce35ac9861dbb58e082aa0417a0fb5de5b0cab0abc3db06" Jan 29 18:04:54 crc kubenswrapper[4886]: I0129 18:04:54.417723 4886 scope.go:117] "RemoveContainer" containerID="11705f34993c6638ba8642b38964a86a5de557e9f2d5c74da2dc5a7240803418" Jan 29 18:04:54 crc kubenswrapper[4886]: E0129 18:04:54.421770 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11705f34993c6638ba8642b38964a86a5de557e9f2d5c74da2dc5a7240803418\": container with ID starting with 11705f34993c6638ba8642b38964a86a5de557e9f2d5c74da2dc5a7240803418 not found: ID does not exist" containerID="11705f34993c6638ba8642b38964a86a5de557e9f2d5c74da2dc5a7240803418" Jan 29 18:04:54 crc kubenswrapper[4886]: I0129 18:04:54.421816 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11705f34993c6638ba8642b38964a86a5de557e9f2d5c74da2dc5a7240803418"} err="failed to get container status \"11705f34993c6638ba8642b38964a86a5de557e9f2d5c74da2dc5a7240803418\": rpc error: code = NotFound desc = could not find container \"11705f34993c6638ba8642b38964a86a5de557e9f2d5c74da2dc5a7240803418\": container with ID starting with 11705f34993c6638ba8642b38964a86a5de557e9f2d5c74da2dc5a7240803418 not found: ID does not exist" Jan 29 18:04:54 crc kubenswrapper[4886]: I0129 18:04:54.421852 4886 scope.go:117] "RemoveContainer" containerID="7d317f44136dcc76fb7783151dfa87ebccf117dd0b425dcb78ac3d5980079592" Jan 29 18:04:54 crc kubenswrapper[4886]: E0129 18:04:54.422260 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d317f44136dcc76fb7783151dfa87ebccf117dd0b425dcb78ac3d5980079592\": container with ID starting with 7d317f44136dcc76fb7783151dfa87ebccf117dd0b425dcb78ac3d5980079592 not found: ID does not exist" containerID="7d317f44136dcc76fb7783151dfa87ebccf117dd0b425dcb78ac3d5980079592" Jan 29 18:04:54 crc kubenswrapper[4886]: I0129 18:04:54.422341 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d317f44136dcc76fb7783151dfa87ebccf117dd0b425dcb78ac3d5980079592"} err="failed to get container status \"7d317f44136dcc76fb7783151dfa87ebccf117dd0b425dcb78ac3d5980079592\": rpc error: code = NotFound desc = could not find container \"7d317f44136dcc76fb7783151dfa87ebccf117dd0b425dcb78ac3d5980079592\": container with ID starting with 7d317f44136dcc76fb7783151dfa87ebccf117dd0b425dcb78ac3d5980079592 not found: ID does not exist" Jan 29 18:04:54 crc kubenswrapper[4886]: I0129 18:04:54.422369 4886 scope.go:117] "RemoveContainer" containerID="468f6a38bd34b0f68ce35ac9861dbb58e082aa0417a0fb5de5b0cab0abc3db06" Jan 29 18:04:54 crc kubenswrapper[4886]: E0129 18:04:54.424701 4886 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"468f6a38bd34b0f68ce35ac9861dbb58e082aa0417a0fb5de5b0cab0abc3db06\": container with ID starting with 468f6a38bd34b0f68ce35ac9861dbb58e082aa0417a0fb5de5b0cab0abc3db06 not found: ID does not exist" containerID="468f6a38bd34b0f68ce35ac9861dbb58e082aa0417a0fb5de5b0cab0abc3db06" Jan 29 18:04:54 crc kubenswrapper[4886]: I0129 18:04:54.424770 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"468f6a38bd34b0f68ce35ac9861dbb58e082aa0417a0fb5de5b0cab0abc3db06"} err="failed to get container status \"468f6a38bd34b0f68ce35ac9861dbb58e082aa0417a0fb5de5b0cab0abc3db06\": rpc error: code = NotFound desc = could not find container \"468f6a38bd34b0f68ce35ac9861dbb58e082aa0417a0fb5de5b0cab0abc3db06\": container with ID starting with 468f6a38bd34b0f68ce35ac9861dbb58e082aa0417a0fb5de5b0cab0abc3db06 not found: ID does not exist" Jan 29 18:04:54 crc kubenswrapper[4886]: I0129 18:04:54.625942 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ace6b3f5-2f50-4320-87db-40229f5f2cfa" path="/var/lib/kubelet/pods/ace6b3f5-2f50-4320-87db-40229f5f2cfa/volumes" Jan 29 18:04:58 crc kubenswrapper[4886]: I0129 18:04:58.622229 4886 scope.go:117] "RemoveContainer" containerID="d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a" Jan 29 18:04:58 crc kubenswrapper[4886]: E0129 18:04:58.622766 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:05:07 crc kubenswrapper[4886]: I0129 18:05:07.072025 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5fb894ff6d-w7s26_b87936a5-19e1-4a58-948f-1f569c08bb6b/barbican-api/0.log" Jan 29 18:05:07 crc kubenswrapper[4886]: I0129 18:05:07.277547 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5fb894ff6d-w7s26_b87936a5-19e1-4a58-948f-1f569c08bb6b/barbican-api-log/0.log" Jan 29 18:05:07 crc kubenswrapper[4886]: I0129 18:05:07.339196 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-85cc5d579d-jhqqd_054e527c-8ce1-4d03-8fef-0430934daba3/barbican-keystone-listener-log/0.log" Jan 29 18:05:07 crc kubenswrapper[4886]: I0129 18:05:07.366508 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-85cc5d579d-jhqqd_054e527c-8ce1-4d03-8fef-0430934daba3/barbican-keystone-listener/0.log" Jan 29 18:05:07 crc kubenswrapper[4886]: I0129 18:05:07.542781 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-f4657cb95-4tfvc_8f83894a-73ec-405a-bdd2-2044b3f9140a/barbican-worker-log/0.log" Jan 29 18:05:07 crc kubenswrapper[4886]: I0129 18:05:07.552953 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-f4657cb95-4tfvc_8f83894a-73ec-405a-bdd2-2044b3f9140a/barbican-worker/0.log" Jan 29 18:05:07 crc kubenswrapper[4886]: I0129 18:05:07.734742 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_23f9894b-5940-4f78-9062-719f7e7eca3a/ceilometer-central-agent/0.log" Jan 29 18:05:07 crc kubenswrapper[4886]: I0129 
18:05:07.735085 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_23f9894b-5940-4f78-9062-719f7e7eca3a/ceilometer-notification-agent/0.log" Jan 29 18:05:07 crc kubenswrapper[4886]: I0129 18:05:07.789562 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_23f9894b-5940-4f78-9062-719f7e7eca3a/sg-core/0.log" Jan 29 18:05:07 crc kubenswrapper[4886]: I0129 18:05:07.790845 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_23f9894b-5940-4f78-9062-719f7e7eca3a/proxy-httpd/0.log" Jan 29 18:05:07 crc kubenswrapper[4886]: I0129 18:05:07.975167 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3573eaa4-4c27-4747-a691-15ae61d152f3/cinder-api/0.log" Jan 29 18:05:07 crc kubenswrapper[4886]: I0129 18:05:07.987213 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3573eaa4-4c27-4747-a691-15ae61d152f3/cinder-api-log/0.log" Jan 29 18:05:08 crc kubenswrapper[4886]: I0129 18:05:08.157248 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d9b55479-5ea1-4a5b-9e34-e83313b04dec/cinder-scheduler/0.log" Jan 29 18:05:08 crc kubenswrapper[4886]: I0129 18:05:08.232359 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b7bbf7cf9-fh86h_efe27968-ef82-463a-8852-222528e7980d/init/0.log" Jan 29 18:05:08 crc kubenswrapper[4886]: I0129 18:05:08.259451 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d9b55479-5ea1-4a5b-9e34-e83313b04dec/probe/0.log" Jan 29 18:05:08 crc kubenswrapper[4886]: I0129 18:05:08.449359 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b7bbf7cf9-fh86h_efe27968-ef82-463a-8852-222528e7980d/init/0.log" Jan 29 18:05:08 crc kubenswrapper[4886]: I0129 18:05:08.489183 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b7bbf7cf9-fh86h_efe27968-ef82-463a-8852-222528e7980d/dnsmasq-dns/0.log" Jan 29 18:05:08 crc kubenswrapper[4886]: I0129 18:05:08.516596 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_2dbf03ea-9df9-4f03-aee9-113dabed1c7a/glance-httpd/0.log" Jan 29 18:05:08 crc kubenswrapper[4886]: I0129 18:05:08.610829 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_2dbf03ea-9df9-4f03-aee9-113dabed1c7a/glance-log/0.log" Jan 29 18:05:08 crc kubenswrapper[4886]: I0129 18:05:08.720899 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_81437be4-b399-40e9-9c33-e71319326af8/glance-httpd/0.log" Jan 29 18:05:08 crc kubenswrapper[4886]: I0129 18:05:08.740557 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_81437be4-b399-40e9-9c33-e71319326af8/glance-log/0.log" Jan 29 18:05:09 crc kubenswrapper[4886]: I0129 18:05:09.441011 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-5f6fd667fd-4s5hk_3b8fde91-2520-41c6-bc79-1f6b186dcbf0/heat-engine/0.log" Jan 29 18:05:09 crc kubenswrapper[4886]: I0129 18:05:09.616349 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-7c65449fdf-42rxg_c5fcdcf3-c18b-4f0b-ac46-7be1d56fc3a2/heat-cfnapi/0.log" Jan 29 18:05:09 crc kubenswrapper[4886]: I0129 18:05:09.682789 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_heat-api-64bb5bfdfc-h2mgd_a004f05d-8133-4d8e-9e3c-d5c9411351ad/heat-api/0.log" Jan 29 18:05:09 crc kubenswrapper[4886]: I0129 18:05:09.833508 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5499bdc9-q6hr4_d9e327b0-6e20-4b1d-a18f-64b8b49ef36d/keystone-api/0.log" Jan 29 18:05:09 crc kubenswrapper[4886]: I0129 18:05:09.896413 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29495161-tqptf_62fe5584-12c8-4933-868d-bbb9e04f7bb3/keystone-cron/0.log" Jan 29 18:05:10 crc kubenswrapper[4886]: I0129 18:05:10.122220 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_fa42ea64-73bc-439c-802c-65ef65a39015/kube-state-metrics/0.log" Jan 29 18:05:10 crc kubenswrapper[4886]: I0129 18:05:10.390832 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_aa7423ef-f68a-4969-a81b-fd2ce4dbc16a/mysqld-exporter/0.log" Jan 29 18:05:10 crc kubenswrapper[4886]: I0129 18:05:10.813146 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-846d49f49c-kc98b_344feff6-8139-425e-b7dc-f35fe5b17247/neutron-api/0.log" Jan 29 18:05:10 crc kubenswrapper[4886]: I0129 18:05:10.827157 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-846d49f49c-kc98b_344feff6-8139-425e-b7dc-f35fe5b17247/neutron-httpd/0.log" Jan 29 18:05:11 crc kubenswrapper[4886]: I0129 18:05:11.178642 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_cbffe358-e916-4693-b76d-09fd332a7082/nova-api-log/0.log" Jan 29 18:05:11 crc kubenswrapper[4886]: I0129 18:05:11.277125 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_bb22403c-016a-48ea-954a-b7b14ea77d7f/nova-cell0-conductor-conductor/0.log" Jan 29 18:05:11 crc kubenswrapper[4886]: I0129 18:05:11.460730 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_cbffe358-e916-4693-b76d-09fd332a7082/nova-api-api/0.log" Jan 29 18:05:11 crc kubenswrapper[4886]: I0129 18:05:11.518795 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_08160d2e-8072-4d08-9dd2-4b5f256b6d9d/nova-cell1-conductor-conductor/0.log" Jan 29 18:05:11 crc kubenswrapper[4886]: I0129 18:05:11.614741 4886 scope.go:117] "RemoveContainer" containerID="d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a" Jan 29 18:05:11 crc kubenswrapper[4886]: E0129 18:05:11.615076 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:05:11 crc kubenswrapper[4886]: I0129 18:05:11.862491 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_c2249ae5-133d-4750-9d7a-529dc8c9b39a/nova-cell1-novncproxy-novncproxy/0.log" Jan 29 18:05:11 crc kubenswrapper[4886]: I0129 18:05:11.935308 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_9a568175-84cc-425a-9adf-5013a7fb5171/nova-metadata-log/0.log" Jan 29 18:05:12 crc kubenswrapper[4886]: I0129 18:05:12.232512 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_fc4c563c-21d3-41cf-aabf-dd4429d59b62/nova-scheduler-scheduler/0.log" Jan 29 18:05:12 crc kubenswrapper[4886]: I0129 18:05:12.349671 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_954d7d1e-fd92-4c83-87d8-87a1f866dbbe/mysql-bootstrap/0.log" Jan 29 18:05:12 crc kubenswrapper[4886]: I0129 18:05:12.696982 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_954d7d1e-fd92-4c83-87d8-87a1f866dbbe/mysql-bootstrap/0.log" Jan 29 18:05:12 crc kubenswrapper[4886]: I0129 18:05:12.744872 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_954d7d1e-fd92-4c83-87d8-87a1f866dbbe/galera/0.log" Jan 29 18:05:12 crc kubenswrapper[4886]: I0129 18:05:12.937272 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_98bed306-aa68-4e53-affc-e04497079ccb/mysql-bootstrap/0.log" Jan 29 18:05:13 crc kubenswrapper[4886]: I0129 18:05:13.160220 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_98bed306-aa68-4e53-affc-e04497079ccb/mysql-bootstrap/0.log" Jan 29 18:05:13 crc kubenswrapper[4886]: I0129 18:05:13.160694 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_98bed306-aa68-4e53-affc-e04497079ccb/galera/0.log" Jan 29 18:05:13 crc kubenswrapper[4886]: I0129 18:05:13.378583 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_be43aab6-3888-4260-a85c-147e2ae0a36d/openstackclient/0.log" Jan 29 18:05:13 crc kubenswrapper[4886]: I0129 18:05:13.451243 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-b7d9p_544b4515-481c-47f1-acb6-ed332a3497d4/ovn-controller/0.log" Jan 29 18:05:13 crc kubenswrapper[4886]: I0129 18:05:13.559416 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_9a568175-84cc-425a-9adf-5013a7fb5171/nova-metadata-metadata/0.log" Jan 29 18:05:13 crc kubenswrapper[4886]: I0129 18:05:13.734630 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-6f8zt_ff160c34-86ad-4048-9c67-2071e6c38373/openstack-network-exporter/0.log" Jan 29 18:05:13 crc kubenswrapper[4886]: I0129 18:05:13.791375 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-xhds2_03dc141f-69cc-4cb4-af0b-acf85642b86e/ovsdb-server-init/0.log" Jan 29 18:05:14 crc kubenswrapper[4886]: I0129 18:05:14.024348 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-xhds2_03dc141f-69cc-4cb4-af0b-acf85642b86e/ovsdb-server-init/0.log" Jan 29 18:05:14 crc kubenswrapper[4886]: I0129 18:05:14.035950 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-xhds2_03dc141f-69cc-4cb4-af0b-acf85642b86e/ovs-vswitchd/0.log" Jan 29 18:05:14 crc kubenswrapper[4886]: I0129 18:05:14.068699 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-xhds2_03dc141f-69cc-4cb4-af0b-acf85642b86e/ovsdb-server/0.log" Jan 29 18:05:14 crc kubenswrapper[4886]: I0129 18:05:14.325739 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_dc04c928-b93c-49a3-a653-f82b5e686da5/ovn-northd/0.log" Jan 29 18:05:14 crc kubenswrapper[4886]: I0129 18:05:14.337695 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-northd-0_dc04c928-b93c-49a3-a653-f82b5e686da5/openstack-network-exporter/0.log" Jan 29 18:05:14 crc kubenswrapper[4886]: I0129 18:05:14.370411 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_39601bb5-f2bc-47a6-824a-609c207b963f/openstack-network-exporter/0.log" Jan 29 18:05:14 crc kubenswrapper[4886]: I0129 18:05:14.564070 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_39601bb5-f2bc-47a6-824a-609c207b963f/ovsdbserver-nb/0.log" Jan 29 18:05:14 crc kubenswrapper[4886]: I0129 18:05:14.642005 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7b015d0c-8672-450a-a079-965cc4ccd07f/openstack-network-exporter/0.log" Jan 29 18:05:14 crc kubenswrapper[4886]: I0129 18:05:14.667533 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7b015d0c-8672-450a-a079-965cc4ccd07f/ovsdbserver-sb/0.log" Jan 29 18:05:14 crc kubenswrapper[4886]: I0129 18:05:14.897265 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-795d8c76d8-x2zqv_7e13d48e-3469-4f76-8bae-ab1a21556f5a/placement-api/0.log" Jan 29 18:05:15 crc kubenswrapper[4886]: I0129 18:05:15.258960 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8b3a2d6b-4eb5-44a2-837b-cfbe63f07107/init-config-reloader/0.log" Jan 29 18:05:15 crc kubenswrapper[4886]: I0129 18:05:15.279836 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-795d8c76d8-x2zqv_7e13d48e-3469-4f76-8bae-ab1a21556f5a/placement-log/0.log" Jan 29 18:05:15 crc kubenswrapper[4886]: I0129 18:05:15.418154 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8b3a2d6b-4eb5-44a2-837b-cfbe63f07107/init-config-reloader/0.log" Jan 29 18:05:15 crc kubenswrapper[4886]: I0129 18:05:15.600357 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8b3a2d6b-4eb5-44a2-837b-cfbe63f07107/config-reloader/0.log" Jan 29 18:05:15 crc kubenswrapper[4886]: I0129 18:05:15.780447 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8b3a2d6b-4eb5-44a2-837b-cfbe63f07107/prometheus/0.log" Jan 29 18:05:15 crc kubenswrapper[4886]: I0129 18:05:15.837268 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8b3a2d6b-4eb5-44a2-837b-cfbe63f07107/thanos-sidecar/0.log" Jan 29 18:05:15 crc kubenswrapper[4886]: I0129 18:05:15.961954 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9d0db9ae-746b-419a-bc61-bf85645d2bff/setup-container/0.log" Jan 29 18:05:16 crc kubenswrapper[4886]: I0129 18:05:16.166125 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9d0db9ae-746b-419a-bc61-bf85645d2bff/setup-container/0.log" Jan 29 18:05:16 crc kubenswrapper[4886]: I0129 18:05:16.216072 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2b0be43b-8956-45aa-ad50-de9183b3fea3/setup-container/0.log" Jan 29 18:05:16 crc kubenswrapper[4886]: I0129 18:05:16.313744 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9d0db9ae-746b-419a-bc61-bf85645d2bff/rabbitmq/0.log" Jan 29 18:05:16 crc kubenswrapper[4886]: I0129 18:05:16.415422 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_2b0be43b-8956-45aa-ad50-de9183b3fea3/setup-container/0.log" Jan 29 18:05:16 crc kubenswrapper[4886]: I0129 18:05:16.531212 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10/setup-container/0.log" Jan 29 18:05:16 crc kubenswrapper[4886]: I0129 18:05:16.653660 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2b0be43b-8956-45aa-ad50-de9183b3fea3/rabbitmq/0.log" Jan 29 18:05:16 crc kubenswrapper[4886]: I0129 18:05:16.754393 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10/setup-container/0.log" Jan 29 18:05:16 crc kubenswrapper[4886]: I0129 18:05:16.795128 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_49ed84c4-2bd9-4fb8-88fe-5bd9fe537a10/rabbitmq/0.log" Jan 29 18:05:16 crc kubenswrapper[4886]: I0129 18:05:16.914654 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_842bfe4d-04ba-4143-9076-3033163c7b82/setup-container/0.log" Jan 29 18:05:17 crc kubenswrapper[4886]: I0129 18:05:17.095964 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_842bfe4d-04ba-4143-9076-3033163c7b82/setup-container/0.log" Jan 29 18:05:17 crc kubenswrapper[4886]: I0129 18:05:17.133701 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_842bfe4d-04ba-4143-9076-3033163c7b82/rabbitmq/0.log" Jan 29 18:05:17 crc kubenswrapper[4886]: I0129 18:05:17.347195 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-f458794ff-v7p92_79c81ef9-65c7-4372-9a47-8ed93521eadf/proxy-httpd/0.log" Jan 29 18:05:17 crc kubenswrapper[4886]: I0129 18:05:17.414799 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-f458794ff-v7p92_79c81ef9-65c7-4372-9a47-8ed93521eadf/proxy-server/0.log" Jan 29 18:05:17 crc kubenswrapper[4886]: I0129 18:05:17.424778 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-s7294_ebccb3a0-d421-4c30-9201-43e9106e4006/swift-ring-rebalance/0.log" Jan 29 18:05:17 crc kubenswrapper[4886]: I0129 18:05:17.684132 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6e2f2c6c-bc32-4a32-ba2c-8954d277ce47/account-auditor/0.log" Jan 29 18:05:17 crc kubenswrapper[4886]: I0129 18:05:17.685474 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6e2f2c6c-bc32-4a32-ba2c-8954d277ce47/account-reaper/0.log" Jan 29 18:05:17 crc kubenswrapper[4886]: I0129 18:05:17.809069 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6e2f2c6c-bc32-4a32-ba2c-8954d277ce47/account-replicator/0.log" Jan 29 18:05:17 crc kubenswrapper[4886]: I0129 18:05:17.869052 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6e2f2c6c-bc32-4a32-ba2c-8954d277ce47/account-server/0.log" Jan 29 18:05:17 crc kubenswrapper[4886]: I0129 18:05:17.962355 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6e2f2c6c-bc32-4a32-ba2c-8954d277ce47/container-auditor/0.log" Jan 29 18:05:17 crc kubenswrapper[4886]: I0129 18:05:17.972476 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6e2f2c6c-bc32-4a32-ba2c-8954d277ce47/container-replicator/0.log" Jan 
29 18:05:18 crc kubenswrapper[4886]: I0129 18:05:18.037960 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6e2f2c6c-bc32-4a32-ba2c-8954d277ce47/container-server/0.log" Jan 29 18:05:18 crc kubenswrapper[4886]: I0129 18:05:18.157610 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6e2f2c6c-bc32-4a32-ba2c-8954d277ce47/container-updater/0.log" Jan 29 18:05:18 crc kubenswrapper[4886]: I0129 18:05:18.207915 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6e2f2c6c-bc32-4a32-ba2c-8954d277ce47/object-auditor/0.log" Jan 29 18:05:18 crc kubenswrapper[4886]: I0129 18:05:18.261682 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6e2f2c6c-bc32-4a32-ba2c-8954d277ce47/object-expirer/0.log" Jan 29 18:05:18 crc kubenswrapper[4886]: I0129 18:05:18.286401 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6e2f2c6c-bc32-4a32-ba2c-8954d277ce47/object-replicator/0.log" Jan 29 18:05:18 crc kubenswrapper[4886]: I0129 18:05:18.354705 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6e2f2c6c-bc32-4a32-ba2c-8954d277ce47/object-server/0.log" Jan 29 18:05:18 crc kubenswrapper[4886]: I0129 18:05:18.480241 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6e2f2c6c-bc32-4a32-ba2c-8954d277ce47/swift-recon-cron/0.log" Jan 29 18:05:18 crc kubenswrapper[4886]: I0129 18:05:18.480434 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6e2f2c6c-bc32-4a32-ba2c-8954d277ce47/rsync/0.log" Jan 29 18:05:18 crc kubenswrapper[4886]: I0129 18:05:18.494762 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6e2f2c6c-bc32-4a32-ba2c-8954d277ce47/object-updater/0.log" Jan 29 18:05:23 crc kubenswrapper[4886]: I0129 18:05:23.616213 4886 scope.go:117] "RemoveContainer" containerID="d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a" Jan 29 18:05:23 crc kubenswrapper[4886]: E0129 18:05:23.618111 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:05:24 crc kubenswrapper[4886]: I0129 18:05:24.076532 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_88c8ef15-a2b1-41df-8048-752b56d26653/memcached/0.log" Jan 29 18:05:37 crc kubenswrapper[4886]: I0129 18:05:37.614980 4886 scope.go:117] "RemoveContainer" containerID="d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a" Jan 29 18:05:38 crc kubenswrapper[4886]: I0129 18:05:38.717362 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerStarted","Data":"a8607a4ceafc19dc29f39e1c49905b447674d1829f5c41ef929e075c395f9df6"} Jan 29 18:05:47 crc kubenswrapper[4886]: I0129 18:05:47.120536 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp_c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e/util/0.log" Jan 29 18:05:47 crc kubenswrapper[4886]: I0129 18:05:47.298067 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp_c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e/pull/0.log" Jan 29 18:05:47 crc kubenswrapper[4886]: I0129 18:05:47.305316 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp_c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e/util/0.log" Jan 29 18:05:47 crc kubenswrapper[4886]: I0129 18:05:47.400343 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp_c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e/pull/0.log" Jan 29 18:05:47 crc kubenswrapper[4886]: I0129 18:05:47.725425 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp_c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e/util/0.log" Jan 29 18:05:47 crc kubenswrapper[4886]: I0129 18:05:47.730409 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp_c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e/extract/0.log" Jan 29 18:05:47 crc kubenswrapper[4886]: I0129 18:05:47.735471 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_39139fddf92796f15e1bf79fe958390e5d16e6c9136394aea75c727c23pvldp_c5eb87e5-9a66-4bf3-8348-1dc03c7e0e8e/pull/0.log" Jan 29 18:05:48 crc kubenswrapper[4886]: I0129 18:05:48.027440 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-w6qc6_4e16e340-e213-492a-9c93-851df7b1bddb/manager/0.log" Jan 29 18:05:48 crc kubenswrapper[4886]: I0129 18:05:48.052233 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-2g2cz_3ffc5e8b-7f7a-4585-b43d-07e2589493c9/manager/0.log" Jan 29 18:05:48 crc kubenswrapper[4886]: I0129 18:05:48.150685 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-rhxnz_d01e417c-a1b0-445d-83eb-f3c21a492138/manager/0.log" Jan 29 18:05:48 crc kubenswrapper[4886]: I0129 18:05:48.413168 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-qf2xg_3c56c53e-a292-4e75-b069-c1d06ceeb6c5/manager/0.log" Jan 29 18:05:48 crc kubenswrapper[4886]: I0129 18:05:48.465024 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-pfw9c_02decfa9-69fb-46b5-8b30-30954e39d411/manager/0.log" Jan 29 18:05:48 crc kubenswrapper[4886]: I0129 18:05:48.527619 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-4mmm8_81b8c703-d895-41ce-8ca3-99fd6b6eecb6/manager/0.log" Jan 29 18:05:48 crc kubenswrapper[4886]: I0129 18:05:48.780500 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-77z62_10cac00e-0cd8-4d53-a4dd-3f6b5200e7e0/manager/0.log" Jan 29 18:05:48 crc kubenswrapper[4886]: I0129 18:05:48.884925 4886 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-t5n28_f2898e34-e423-4576-a765-3919510dcd85/manager/0.log" Jan 29 18:05:49 crc kubenswrapper[4886]: I0129 18:05:49.011083 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-kwr4n_67107e9f-cf09-4d35-af26-c77f4d76083a/manager/0.log" Jan 29 18:05:49 crc kubenswrapper[4886]: I0129 18:05:49.126317 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-zpgq2_70336809-8231-4ed9-a912-8b668aaa53bb/manager/0.log" Jan 29 18:05:49 crc kubenswrapper[4886]: I0129 18:05:49.329937 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-c4j5s_4c2d29a3-d017-4e76-9a82-02943a6b38bf/manager/0.log" Jan 29 18:05:49 crc kubenswrapper[4886]: I0129 18:05:49.470930 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-9zqmc_053a2790-370f-44bd-a2c0-603ffb22ed3c/manager/0.log" Jan 29 18:05:49 crc kubenswrapper[4886]: I0129 18:05:49.648843 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-dxcgn_c3cbde0f-6b5d-47cf-93e6-3d2e12051aba/manager/0.log" Jan 29 18:05:49 crc kubenswrapper[4886]: I0129 18:05:49.740416 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-8gq2g_7b52b050-b925-4562-8682-693917b7899c/manager/0.log" Jan 29 18:05:49 crc kubenswrapper[4886]: I0129 18:05:49.852489 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dqmkhh_c2b6285c-ada4-43f6-8716-53b2afa13723/manager/0.log" Jan 29 18:05:50 crc kubenswrapper[4886]: I0129 18:05:50.090345 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-86bf76f8cb-r9sbf_d4b791b8-523f-4cf0-9ec7-9283c2fd4dde/operator/0.log" Jan 29 18:05:50 crc kubenswrapper[4886]: I0129 18:05:50.333885 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-ddcl7_9b2b35ba-9f49-4dd6-816d-6acc4e54e514/registry-server/0.log" Jan 29 18:05:50 crc kubenswrapper[4886]: I0129 18:05:50.573688 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-xnccq_14d9257b-94ae-4b29-b45a-403e034535d3/manager/0.log" Jan 29 18:05:50 crc kubenswrapper[4886]: I0129 18:05:50.971369 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-xt9wq_53042ed9-d676-4bb4-bf7b-9b3520aafd12/manager/0.log" Jan 29 18:05:51 crc kubenswrapper[4886]: I0129 18:05:51.039750 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-546c7b8b6d-hngs4_037bf2ff-dd50-4d62-a525-5304c088cbc0/manager/0.log" Jan 29 18:05:51 crc kubenswrapper[4886]: I0129 18:05:51.126605 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-ffdr9_165231a4-c627-484b-9aab-b4ce3feafe7e/operator/0.log" Jan 29 18:05:51 crc kubenswrapper[4886]: I0129 18:05:51.200132 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-cmfj2_608c459b-5b47-478a-9e3a-d83d935ae7c7/manager/0.log" Jan 29 18:05:51 crc kubenswrapper[4886]: I0129 18:05:51.611480 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-hf95f_cbfeb105-c5ee-408e-aac9-e4128e58f0e3/manager/0.log" Jan 29 18:05:51 crc kubenswrapper[4886]: I0129 18:05:51.682022 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-xnrxl_6a145dac-4d02-493c-9bd8-2f9652fcb1d1/manager/0.log" Jan 29 18:05:51 crc kubenswrapper[4886]: I0129 18:05:51.798074 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-75495fd598-2hpj4_7db85474-4c59-4db6-ab4a-51092ebd5c62/manager/0.log" Jan 29 18:06:14 crc kubenswrapper[4886]: I0129 18:06:14.059351 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-l5v6d_009f91e7-865b-400a-a879-4985c84b321c/control-plane-machine-set-operator/0.log" Jan 29 18:06:14 crc kubenswrapper[4886]: I0129 18:06:14.200401 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-fgmg6_3510e180-be29-469c-bfa0-b06702f80c93/kube-rbac-proxy/0.log" Jan 29 18:06:14 crc kubenswrapper[4886]: I0129 18:06:14.262174 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-fgmg6_3510e180-be29-469c-bfa0-b06702f80c93/machine-api-operator/0.log" Jan 29 18:06:29 crc kubenswrapper[4886]: I0129 18:06:29.651258 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-n8tt2_0eee9f11-c5ff-490b-a5ea-7a62ef8f0a0a/cert-manager-controller/0.log" Jan 29 18:06:29 crc kubenswrapper[4886]: I0129 18:06:29.863222 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-bqffj_f883321e-6f99-4c0d-89ea-377fec9d166c/cert-manager-cainjector/0.log" Jan 29 18:06:30 crc kubenswrapper[4886]: I0129 18:06:30.014213 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-sd87l_a80a9fce-17df-45c6-b123-f3060469c1c9/cert-manager-webhook/0.log" Jan 29 18:06:45 crc kubenswrapper[4886]: I0129 18:06:45.266950 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-d4tp4_2814fca3-5ea5-4b77-aad5-0308881c88bb/nmstate-console-plugin/0.log" Jan 29 18:06:45 crc kubenswrapper[4886]: I0129 18:06:45.454170 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-9lh4n_848b9df5-c882-4017-b1ad-6ac496646a76/nmstate-handler/0.log" Jan 29 18:06:45 crc kubenswrapper[4886]: I0129 18:06:45.496737 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-ntx9m_515c481a-e563-41c3-b5ff-d5957faf5217/kube-rbac-proxy/0.log" Jan 29 18:06:45 crc kubenswrapper[4886]: I0129 18:06:45.577146 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-ntx9m_515c481a-e563-41c3-b5ff-d5957faf5217/nmstate-metrics/0.log" Jan 29 18:06:45 crc kubenswrapper[4886]: I0129 18:06:45.659948 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-xn5zh_64313301-3779-4923-949f-b8de5c30b5bb/nmstate-operator/0.log" Jan 29 18:06:45 crc kubenswrapper[4886]: I0129 18:06:45.786664 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-mv5wp_c42903b0-c0d4-4c39-bed3-3c9d083e753d/nmstate-webhook/0.log" Jan 29 18:07:01 crc kubenswrapper[4886]: I0129 18:07:01.116055 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5b44bcdc44-bgqfw_994fe9e1-7adf-4aab-bc9e-d51fd52286a9/kube-rbac-proxy/0.log" Jan 29 18:07:01 crc kubenswrapper[4886]: I0129 18:07:01.136502 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5b44bcdc44-bgqfw_994fe9e1-7adf-4aab-bc9e-d51fd52286a9/manager/0.log" Jan 29 18:07:15 crc kubenswrapper[4886]: I0129 18:07:15.273393 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-72k5z_1151b336-be43-4e43-959d-463c956e9bc4/prometheus-operator/0.log" Jan 29 18:07:15 crc kubenswrapper[4886]: I0129 18:07:15.475359 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-78f4cbbdd9-75xq9_e2e7310d-6390-4a0d-b0bd-f8467c80517c/prometheus-operator-admission-webhook/0.log" Jan 29 18:07:15 crc kubenswrapper[4886]: I0129 18:07:15.526640 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-78f4cbbdd9-hrhb5_e1472730-ce1e-4333-a6c6-930196b9d257/prometheus-operator-admission-webhook/0.log" Jan 29 18:07:15 crc kubenswrapper[4886]: I0129 18:07:15.665264 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-w5qml_17549a68-0567-40f8-9dda-37cd61f71b94/operator/0.log" Jan 29 18:07:15 crc kubenswrapper[4886]: I0129 18:07:15.708054 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-ld46c_ee1da890-a690-46b4-95aa-3f282b3cdc30/observability-ui-dashboards/0.log" Jan 29 18:07:15 crc kubenswrapper[4886]: I0129 18:07:15.839557 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-dtcpm_d2a26d31-689d-4052-9df2-1654feb68c2d/perses-operator/0.log" Jan 29 18:07:31 crc kubenswrapper[4886]: I0129 18:07:31.923181 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-79cf69ddc8-hgdlt_7f5851a1-d10c-445d-bffc-12a6acc01ead/cluster-logging-operator/0.log" Jan 29 18:07:32 crc kubenswrapper[4886]: I0129 18:07:32.167737 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-qnmmn_bd8dc819-215b-44f5-b758-9bac32be60f5/collector/0.log" Jan 29 18:07:32 crc kubenswrapper[4886]: I0129 18:07:32.295448 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_37c313cd-31f0-4fb3-9241-a3a59b1f55a6/loki-compactor/0.log" Jan 29 18:07:32 crc kubenswrapper[4886]: I0129 18:07:32.369258 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5f678c8dd6-2jzzb_befd63fe-2ae3-4bb3-86fd-ac5486d7fbd1/loki-distributor/0.log" Jan 29 18:07:32 crc kubenswrapper[4886]: I0129 18:07:32.519422 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-gateway-8587c9555d-cszl5_c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b/gateway/0.log" Jan 29 18:07:32 crc kubenswrapper[4886]: I0129 18:07:32.528482 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-8587c9555d-cszl5_c39a9c6b-a3a0-4337-9c29-5fa3c161ef0b/opa/0.log" Jan 29 18:07:32 crc kubenswrapper[4886]: I0129 18:07:32.689002 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-8587c9555d-m4k69_046307bd-2e5e-4d92-b934-57ed8882d1bc/gateway/0.log" Jan 29 18:07:32 crc kubenswrapper[4886]: I0129 18:07:32.783226 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-8587c9555d-m4k69_046307bd-2e5e-4d92-b934-57ed8882d1bc/opa/0.log" Jan 29 18:07:32 crc kubenswrapper[4886]: I0129 18:07:32.809886 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_6059a5a7-5b65-481d-9b0f-f40d863e8310/loki-index-gateway/0.log" Jan 29 18:07:33 crc kubenswrapper[4886]: I0129 18:07:33.046771 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_0dd1a523-96c1-4311-9452-92e6da8a7e9b/loki-ingester/0.log" Jan 29 18:07:33 crc kubenswrapper[4886]: I0129 18:07:33.074394 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76788598db-85zgx_fb80c257-3e6a-45c8-bb6f-6fb2676ef296/loki-querier/0.log" Jan 29 18:07:33 crc kubenswrapper[4886]: I0129 18:07:33.278647 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-69d9546745-9q2lr_fa3af54b-5759-4b53-a998-720bd2ff4608/loki-query-frontend/0.log" Jan 29 18:07:49 crc kubenswrapper[4886]: I0129 18:07:49.037779 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-tlnpb_946b39e6-3f42-4aff-a197-f29de26c175a/kube-rbac-proxy/0.log" Jan 29 18:07:49 crc kubenswrapper[4886]: I0129 18:07:49.184305 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-tlnpb_946b39e6-3f42-4aff-a197-f29de26c175a/controller/0.log" Jan 29 18:07:49 crc kubenswrapper[4886]: I0129 18:07:49.241739 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4pt6_daa4e7b8-3078-4fd1-bb04-5185fa474080/cp-frr-files/0.log" Jan 29 18:07:49 crc kubenswrapper[4886]: I0129 18:07:49.504128 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4pt6_daa4e7b8-3078-4fd1-bb04-5185fa474080/cp-metrics/0.log" Jan 29 18:07:49 crc kubenswrapper[4886]: I0129 18:07:49.506224 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4pt6_daa4e7b8-3078-4fd1-bb04-5185fa474080/cp-reloader/0.log" Jan 29 18:07:49 crc kubenswrapper[4886]: I0129 18:07:49.536817 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4pt6_daa4e7b8-3078-4fd1-bb04-5185fa474080/cp-frr-files/0.log" Jan 29 18:07:49 crc kubenswrapper[4886]: I0129 18:07:49.587171 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4pt6_daa4e7b8-3078-4fd1-bb04-5185fa474080/cp-reloader/0.log" Jan 29 18:07:49 crc kubenswrapper[4886]: I0129 18:07:49.748194 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4pt6_daa4e7b8-3078-4fd1-bb04-5185fa474080/cp-reloader/0.log" Jan 29 18:07:49 crc kubenswrapper[4886]: 
I0129 18:07:49.794223 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4pt6_daa4e7b8-3078-4fd1-bb04-5185fa474080/cp-frr-files/0.log" Jan 29 18:07:49 crc kubenswrapper[4886]: I0129 18:07:49.818898 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4pt6_daa4e7b8-3078-4fd1-bb04-5185fa474080/cp-metrics/0.log" Jan 29 18:07:49 crc kubenswrapper[4886]: I0129 18:07:49.828780 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4pt6_daa4e7b8-3078-4fd1-bb04-5185fa474080/cp-metrics/0.log" Jan 29 18:07:50 crc kubenswrapper[4886]: I0129 18:07:50.015875 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4pt6_daa4e7b8-3078-4fd1-bb04-5185fa474080/cp-reloader/0.log" Jan 29 18:07:50 crc kubenswrapper[4886]: I0129 18:07:50.022777 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4pt6_daa4e7b8-3078-4fd1-bb04-5185fa474080/cp-metrics/0.log" Jan 29 18:07:50 crc kubenswrapper[4886]: I0129 18:07:50.030534 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4pt6_daa4e7b8-3078-4fd1-bb04-5185fa474080/cp-frr-files/0.log" Jan 29 18:07:50 crc kubenswrapper[4886]: I0129 18:07:50.038646 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4pt6_daa4e7b8-3078-4fd1-bb04-5185fa474080/controller/0.log" Jan 29 18:07:50 crc kubenswrapper[4886]: I0129 18:07:50.237664 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4pt6_daa4e7b8-3078-4fd1-bb04-5185fa474080/kube-rbac-proxy/0.log" Jan 29 18:07:50 crc kubenswrapper[4886]: I0129 18:07:50.239142 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4pt6_daa4e7b8-3078-4fd1-bb04-5185fa474080/frr-metrics/0.log" Jan 29 18:07:50 crc kubenswrapper[4886]: I0129 18:07:50.279447 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4pt6_daa4e7b8-3078-4fd1-bb04-5185fa474080/kube-rbac-proxy-frr/0.log" Jan 29 18:07:50 crc kubenswrapper[4886]: I0129 18:07:50.478380 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4pt6_daa4e7b8-3078-4fd1-bb04-5185fa474080/reloader/0.log" Jan 29 18:07:50 crc kubenswrapper[4886]: I0129 18:07:50.592017 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-x455w_cf3feb5c-d348-4c0a-95c7-46f18db4687c/frr-k8s-webhook-server/0.log" Jan 29 18:07:50 crc kubenswrapper[4886]: I0129 18:07:50.995018 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-96d4668dd-sb2zt_a88b1900-1763-4d6c-9b3a-62598ab57eda/webhook-server/0.log" Jan 29 18:07:51 crc kubenswrapper[4886]: I0129 18:07:51.030076 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-77cfddbbb9-wbb7k_dc960811-7f19-4248-8d44-e3ffcb98d650/manager/0.log" Jan 29 18:07:51 crc kubenswrapper[4886]: I0129 18:07:51.220669 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-bmwgt_5fe12a1b-277f-429e-a6b8-a874ec6e4918/kube-rbac-proxy/0.log" Jan 29 18:07:51 crc kubenswrapper[4886]: I0129 18:07:51.799455 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-bmwgt_5fe12a1b-277f-429e-a6b8-a874ec6e4918/speaker/0.log" Jan 29 18:07:52 crc kubenswrapper[4886]: I0129 18:07:52.110703 4886 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4pt6_daa4e7b8-3078-4fd1-bb04-5185fa474080/frr/0.log" Jan 29 18:07:59 crc kubenswrapper[4886]: I0129 18:07:59.660877 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 18:07:59 crc kubenswrapper[4886]: I0129 18:07:59.661368 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 18:08:05 crc kubenswrapper[4886]: I0129 18:08:05.904367 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5_aa613edd-15e0-466f-8739-ab30f6d61801/util/0.log" Jan 29 18:08:06 crc kubenswrapper[4886]: I0129 18:08:06.105496 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5_aa613edd-15e0-466f-8739-ab30f6d61801/util/0.log" Jan 29 18:08:06 crc kubenswrapper[4886]: I0129 18:08:06.143991 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5_aa613edd-15e0-466f-8739-ab30f6d61801/pull/0.log" Jan 29 18:08:06 crc kubenswrapper[4886]: I0129 18:08:06.147540 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5_aa613edd-15e0-466f-8739-ab30f6d61801/pull/0.log" Jan 29 18:08:06 crc kubenswrapper[4886]: I0129 18:08:06.353034 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5_aa613edd-15e0-466f-8739-ab30f6d61801/util/0.log" Jan 29 18:08:06 crc kubenswrapper[4886]: I0129 18:08:06.391244 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5_aa613edd-15e0-466f-8739-ab30f6d61801/extract/0.log" Jan 29 18:08:06 crc kubenswrapper[4886]: I0129 18:08:06.414603 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2fbd5_aa613edd-15e0-466f-8739-ab30f6d61801/pull/0.log" Jan 29 18:08:06 crc kubenswrapper[4886]: I0129 18:08:06.566130 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn_1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7/util/0.log" Jan 29 18:08:06 crc kubenswrapper[4886]: I0129 18:08:06.701120 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn_1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7/util/0.log" Jan 29 18:08:06 crc kubenswrapper[4886]: I0129 18:08:06.718901 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn_1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7/pull/0.log" Jan 29 18:08:06 crc kubenswrapper[4886]: I0129 
18:08:06.753970 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn_1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7/pull/0.log" Jan 29 18:08:06 crc kubenswrapper[4886]: I0129 18:08:06.932323 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn_1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7/util/0.log" Jan 29 18:08:06 crc kubenswrapper[4886]: I0129 18:08:06.933280 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn_1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7/pull/0.log" Jan 29 18:08:06 crc kubenswrapper[4886]: I0129 18:08:06.940615 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jqprn_1a97d794-d0ac-4ad5-ae34-d81a8bf7d5e7/extract/0.log" Jan 29 18:08:07 crc kubenswrapper[4886]: I0129 18:08:07.116910 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ws2lm_d8ab6536-f9ab-4191-9c15-f3fe0453e7d0/extract-utilities/0.log" Jan 29 18:08:07 crc kubenswrapper[4886]: I0129 18:08:07.309683 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ws2lm_d8ab6536-f9ab-4191-9c15-f3fe0453e7d0/extract-utilities/0.log" Jan 29 18:08:07 crc kubenswrapper[4886]: I0129 18:08:07.309913 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ws2lm_d8ab6536-f9ab-4191-9c15-f3fe0453e7d0/extract-content/0.log" Jan 29 18:08:07 crc kubenswrapper[4886]: I0129 18:08:07.353354 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ws2lm_d8ab6536-f9ab-4191-9c15-f3fe0453e7d0/extract-content/0.log" Jan 29 18:08:07 crc kubenswrapper[4886]: I0129 18:08:07.496395 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ws2lm_d8ab6536-f9ab-4191-9c15-f3fe0453e7d0/extract-content/0.log" Jan 29 18:08:07 crc kubenswrapper[4886]: I0129 18:08:07.533902 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ws2lm_d8ab6536-f9ab-4191-9c15-f3fe0453e7d0/extract-utilities/0.log" Jan 29 18:08:07 crc kubenswrapper[4886]: I0129 18:08:07.709255 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vnttp_fbfc768f-4803-4f4e-9019-2aacda68bc47/extract-utilities/0.log" Jan 29 18:08:08 crc kubenswrapper[4886]: I0129 18:08:08.032490 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vnttp_fbfc768f-4803-4f4e-9019-2aacda68bc47/extract-content/0.log" Jan 29 18:08:08 crc kubenswrapper[4886]: I0129 18:08:08.059040 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vnttp_fbfc768f-4803-4f4e-9019-2aacda68bc47/extract-utilities/0.log" Jan 29 18:08:08 crc kubenswrapper[4886]: I0129 18:08:08.082759 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vnttp_fbfc768f-4803-4f4e-9019-2aacda68bc47/extract-content/0.log" Jan 29 18:08:08 crc kubenswrapper[4886]: I0129 18:08:08.233222 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-ws2lm_d8ab6536-f9ab-4191-9c15-f3fe0453e7d0/registry-server/0.log" Jan 29 18:08:08 crc kubenswrapper[4886]: I0129 18:08:08.275267 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vnttp_fbfc768f-4803-4f4e-9019-2aacda68bc47/extract-content/0.log" Jan 29 18:08:08 crc kubenswrapper[4886]: I0129 18:08:08.311186 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vnttp_fbfc768f-4803-4f4e-9019-2aacda68bc47/extract-utilities/0.log" Jan 29 18:08:08 crc kubenswrapper[4886]: I0129 18:08:08.462087 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-m8snn_9cb13d4a-3940-45ef-9135-ff94c6a75b0c/marketplace-operator/0.log" Jan 29 18:08:08 crc kubenswrapper[4886]: I0129 18:08:08.654976 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-52bfx_87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca/extract-utilities/0.log" Jan 29 18:08:08 crc kubenswrapper[4886]: I0129 18:08:08.941514 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-52bfx_87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca/extract-content/0.log" Jan 29 18:08:08 crc kubenswrapper[4886]: I0129 18:08:08.978002 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-52bfx_87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca/extract-utilities/0.log" Jan 29 18:08:08 crc kubenswrapper[4886]: I0129 18:08:08.992603 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vnttp_fbfc768f-4803-4f4e-9019-2aacda68bc47/registry-server/0.log" Jan 29 18:08:09 crc kubenswrapper[4886]: I0129 18:08:09.021748 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-52bfx_87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca/extract-content/0.log" Jan 29 18:08:09 crc kubenswrapper[4886]: I0129 18:08:09.261578 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-52bfx_87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca/extract-utilities/0.log" Jan 29 18:08:09 crc kubenswrapper[4886]: I0129 18:08:09.284477 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-52bfx_87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca/extract-content/0.log" Jan 29 18:08:09 crc kubenswrapper[4886]: I0129 18:08:09.460599 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-52bfx_87b65e80-b30f-4ac4-bb06-ec8eb04cd7ca/registry-server/0.log" Jan 29 18:08:09 crc kubenswrapper[4886]: I0129 18:08:09.474008 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6bdhs_80e49770-fa31-4780-a5ac-38a6bc1221a9/extract-utilities/0.log" Jan 29 18:08:09 crc kubenswrapper[4886]: I0129 18:08:09.672239 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6bdhs_80e49770-fa31-4780-a5ac-38a6bc1221a9/extract-utilities/0.log" Jan 29 18:08:09 crc kubenswrapper[4886]: I0129 18:08:09.712048 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6bdhs_80e49770-fa31-4780-a5ac-38a6bc1221a9/extract-content/0.log" Jan 29 18:08:09 crc kubenswrapper[4886]: I0129 18:08:09.732662 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-6bdhs_80e49770-fa31-4780-a5ac-38a6bc1221a9/extract-content/0.log" Jan 29 18:08:09 crc kubenswrapper[4886]: I0129 18:08:09.888223 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6bdhs_80e49770-fa31-4780-a5ac-38a6bc1221a9/extract-content/0.log" Jan 29 18:08:09 crc kubenswrapper[4886]: I0129 18:08:09.949181 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6bdhs_80e49770-fa31-4780-a5ac-38a6bc1221a9/extract-utilities/0.log" Jan 29 18:08:10 crc kubenswrapper[4886]: I0129 18:08:10.511014 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6bdhs_80e49770-fa31-4780-a5ac-38a6bc1221a9/registry-server/0.log" Jan 29 18:08:25 crc kubenswrapper[4886]: I0129 18:08:25.674222 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-78f4cbbdd9-hrhb5_e1472730-ce1e-4333-a6c6-930196b9d257/prometheus-operator-admission-webhook/0.log" Jan 29 18:08:25 crc kubenswrapper[4886]: I0129 18:08:25.697564 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-72k5z_1151b336-be43-4e43-959d-463c956e9bc4/prometheus-operator/0.log" Jan 29 18:08:25 crc kubenswrapper[4886]: I0129 18:08:25.717232 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-78f4cbbdd9-75xq9_e2e7310d-6390-4a0d-b0bd-f8467c80517c/prometheus-operator-admission-webhook/0.log" Jan 29 18:08:26 crc kubenswrapper[4886]: I0129 18:08:26.065439 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-ld46c_ee1da890-a690-46b4-95aa-3f282b3cdc30/observability-ui-dashboards/0.log" Jan 29 18:08:26 crc kubenswrapper[4886]: I0129 18:08:26.113169 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-w5qml_17549a68-0567-40f8-9dda-37cd61f71b94/operator/0.log" Jan 29 18:08:26 crc kubenswrapper[4886]: I0129 18:08:26.187501 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-dtcpm_d2a26d31-689d-4052-9df2-1654feb68c2d/perses-operator/0.log" Jan 29 18:08:29 crc kubenswrapper[4886]: I0129 18:08:29.660785 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 18:08:29 crc kubenswrapper[4886]: I0129 18:08:29.661471 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 18:08:41 crc kubenswrapper[4886]: I0129 18:08:41.856318 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5b44bcdc44-bgqfw_994fe9e1-7adf-4aab-bc9e-d51fd52286a9/manager/0.log" Jan 29 18:08:41 crc kubenswrapper[4886]: I0129 18:08:41.905761 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5b44bcdc44-bgqfw_994fe9e1-7adf-4aab-bc9e-d51fd52286a9/kube-rbac-proxy/0.log" Jan 29 18:08:59 crc kubenswrapper[4886]: I0129 18:08:59.660546 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 18:08:59 crc kubenswrapper[4886]: I0129 18:08:59.660977 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 18:08:59 crc kubenswrapper[4886]: I0129 18:08:59.661025 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" Jan 29 18:08:59 crc kubenswrapper[4886]: I0129 18:08:59.661549 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a8607a4ceafc19dc29f39e1c49905b447674d1829f5c41ef929e075c395f9df6"} pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 18:08:59 crc kubenswrapper[4886]: I0129 18:08:59.661592 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" containerID="cri-o://a8607a4ceafc19dc29f39e1c49905b447674d1829f5c41ef929e075c395f9df6" gracePeriod=600 Jan 29 18:08:59 crc kubenswrapper[4886]: I0129 18:08:59.858207 4886 generic.go:334] "Generic (PLEG): container finished" podID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerID="a8607a4ceafc19dc29f39e1c49905b447674d1829f5c41ef929e075c395f9df6" exitCode=0 Jan 29 18:08:59 crc kubenswrapper[4886]: I0129 18:08:59.858303 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerDied","Data":"a8607a4ceafc19dc29f39e1c49905b447674d1829f5c41ef929e075c395f9df6"} Jan 29 18:08:59 crc kubenswrapper[4886]: I0129 18:08:59.858616 4886 scope.go:117] "RemoveContainer" containerID="d68f7ec6ceb9d5c0ab55fbdd924d4866f80618e90c6f48af98c7c175db4cf62a" Jan 29 18:09:00 crc kubenswrapper[4886]: I0129 18:09:00.887951 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerStarted","Data":"b900b9c884451219b68e72739d460e4d06900b18f10f7003c7040961c812bb7b"} Jan 29 18:09:56 crc kubenswrapper[4886]: I0129 18:09:56.916208 4886 scope.go:117] "RemoveContainer" containerID="9151f75a515b793b76d61e304966261ea994214c86da5ff66a0d5a788f6197a1" Jan 29 18:10:19 crc kubenswrapper[4886]: I0129 18:10:19.996795 4886 generic.go:334] "Generic (PLEG): container finished" podID="fd01fd0d-8339-41ba-be01-6c3b723b2ec9" containerID="2738216c87f4889a48f2223f13ba05e092ed8aee10ab356bb6e1bc6a50ac2a71" exitCode=0 Jan 29 18:10:19 crc kubenswrapper[4886]: I0129 18:10:19.996874 4886 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lsq2b/must-gather-jss9f" event={"ID":"fd01fd0d-8339-41ba-be01-6c3b723b2ec9","Type":"ContainerDied","Data":"2738216c87f4889a48f2223f13ba05e092ed8aee10ab356bb6e1bc6a50ac2a71"} Jan 29 18:10:19 crc kubenswrapper[4886]: I0129 18:10:19.998759 4886 scope.go:117] "RemoveContainer" containerID="2738216c87f4889a48f2223f13ba05e092ed8aee10ab356bb6e1bc6a50ac2a71" Jan 29 18:10:20 crc kubenswrapper[4886]: I0129 18:10:20.416736 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lsq2b_must-gather-jss9f_fd01fd0d-8339-41ba-be01-6c3b723b2ec9/gather/0.log" Jan 29 18:10:28 crc kubenswrapper[4886]: I0129 18:10:28.077362 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lsq2b/must-gather-jss9f"] Jan 29 18:10:28 crc kubenswrapper[4886]: I0129 18:10:28.078174 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-lsq2b/must-gather-jss9f" podUID="fd01fd0d-8339-41ba-be01-6c3b723b2ec9" containerName="copy" containerID="cri-o://941c9f11cb71ba19e856bc997a9757714af5c5ee6eb22fb06be9c6d2f5939480" gracePeriod=2 Jan 29 18:10:28 crc kubenswrapper[4886]: I0129 18:10:28.085307 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lsq2b/must-gather-jss9f"] Jan 29 18:10:28 crc kubenswrapper[4886]: I0129 18:10:28.616491 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lsq2b_must-gather-jss9f_fd01fd0d-8339-41ba-be01-6c3b723b2ec9/copy/0.log" Jan 29 18:10:28 crc kubenswrapper[4886]: I0129 18:10:28.630226 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lsq2b/must-gather-jss9f" Jan 29 18:10:28 crc kubenswrapper[4886]: I0129 18:10:28.755964 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fd01fd0d-8339-41ba-be01-6c3b723b2ec9-must-gather-output\") pod \"fd01fd0d-8339-41ba-be01-6c3b723b2ec9\" (UID: \"fd01fd0d-8339-41ba-be01-6c3b723b2ec9\") " Jan 29 18:10:28 crc kubenswrapper[4886]: I0129 18:10:28.756032 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv85m\" (UniqueName: \"kubernetes.io/projected/fd01fd0d-8339-41ba-be01-6c3b723b2ec9-kube-api-access-lv85m\") pod \"fd01fd0d-8339-41ba-be01-6c3b723b2ec9\" (UID: \"fd01fd0d-8339-41ba-be01-6c3b723b2ec9\") " Jan 29 18:10:28 crc kubenswrapper[4886]: I0129 18:10:28.764574 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd01fd0d-8339-41ba-be01-6c3b723b2ec9-kube-api-access-lv85m" (OuterVolumeSpecName: "kube-api-access-lv85m") pod "fd01fd0d-8339-41ba-be01-6c3b723b2ec9" (UID: "fd01fd0d-8339-41ba-be01-6c3b723b2ec9"). InnerVolumeSpecName "kube-api-access-lv85m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 18:10:28 crc kubenswrapper[4886]: I0129 18:10:28.859157 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lv85m\" (UniqueName: \"kubernetes.io/projected/fd01fd0d-8339-41ba-be01-6c3b723b2ec9-kube-api-access-lv85m\") on node \"crc\" DevicePath \"\"" Jan 29 18:10:28 crc kubenswrapper[4886]: I0129 18:10:28.949842 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd01fd0d-8339-41ba-be01-6c3b723b2ec9-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "fd01fd0d-8339-41ba-be01-6c3b723b2ec9" (UID: "fd01fd0d-8339-41ba-be01-6c3b723b2ec9"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 18:10:28 crc kubenswrapper[4886]: I0129 18:10:28.960364 4886 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fd01fd0d-8339-41ba-be01-6c3b723b2ec9-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 29 18:10:29 crc kubenswrapper[4886]: I0129 18:10:29.096072 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lsq2b_must-gather-jss9f_fd01fd0d-8339-41ba-be01-6c3b723b2ec9/copy/0.log" Jan 29 18:10:29 crc kubenswrapper[4886]: I0129 18:10:29.097247 4886 generic.go:334] "Generic (PLEG): container finished" podID="fd01fd0d-8339-41ba-be01-6c3b723b2ec9" containerID="941c9f11cb71ba19e856bc997a9757714af5c5ee6eb22fb06be9c6d2f5939480" exitCode=143 Jan 29 18:10:29 crc kubenswrapper[4886]: I0129 18:10:29.097307 4886 scope.go:117] "RemoveContainer" containerID="941c9f11cb71ba19e856bc997a9757714af5c5ee6eb22fb06be9c6d2f5939480" Jan 29 18:10:29 crc kubenswrapper[4886]: I0129 18:10:29.097443 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lsq2b/must-gather-jss9f" Jan 29 18:10:29 crc kubenswrapper[4886]: I0129 18:10:29.123746 4886 scope.go:117] "RemoveContainer" containerID="2738216c87f4889a48f2223f13ba05e092ed8aee10ab356bb6e1bc6a50ac2a71" Jan 29 18:10:29 crc kubenswrapper[4886]: I0129 18:10:29.189383 4886 scope.go:117] "RemoveContainer" containerID="941c9f11cb71ba19e856bc997a9757714af5c5ee6eb22fb06be9c6d2f5939480" Jan 29 18:10:29 crc kubenswrapper[4886]: E0129 18:10:29.190082 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"941c9f11cb71ba19e856bc997a9757714af5c5ee6eb22fb06be9c6d2f5939480\": container with ID starting with 941c9f11cb71ba19e856bc997a9757714af5c5ee6eb22fb06be9c6d2f5939480 not found: ID does not exist" containerID="941c9f11cb71ba19e856bc997a9757714af5c5ee6eb22fb06be9c6d2f5939480" Jan 29 18:10:29 crc kubenswrapper[4886]: I0129 18:10:29.190131 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"941c9f11cb71ba19e856bc997a9757714af5c5ee6eb22fb06be9c6d2f5939480"} err="failed to get container status \"941c9f11cb71ba19e856bc997a9757714af5c5ee6eb22fb06be9c6d2f5939480\": rpc error: code = NotFound desc = could not find container \"941c9f11cb71ba19e856bc997a9757714af5c5ee6eb22fb06be9c6d2f5939480\": container with ID starting with 941c9f11cb71ba19e856bc997a9757714af5c5ee6eb22fb06be9c6d2f5939480 not found: ID does not exist" Jan 29 18:10:29 crc kubenswrapper[4886]: I0129 18:10:29.190158 4886 scope.go:117] "RemoveContainer" containerID="2738216c87f4889a48f2223f13ba05e092ed8aee10ab356bb6e1bc6a50ac2a71" Jan 29 18:10:29 crc kubenswrapper[4886]: E0129 18:10:29.190604 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2738216c87f4889a48f2223f13ba05e092ed8aee10ab356bb6e1bc6a50ac2a71\": container with ID starting with 2738216c87f4889a48f2223f13ba05e092ed8aee10ab356bb6e1bc6a50ac2a71 not found: ID does not exist" containerID="2738216c87f4889a48f2223f13ba05e092ed8aee10ab356bb6e1bc6a50ac2a71" Jan 29 18:10:29 crc kubenswrapper[4886]: I0129 18:10:29.190667 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2738216c87f4889a48f2223f13ba05e092ed8aee10ab356bb6e1bc6a50ac2a71"} err="failed to get container status \"2738216c87f4889a48f2223f13ba05e092ed8aee10ab356bb6e1bc6a50ac2a71\": rpc error: code = NotFound desc = could not find container \"2738216c87f4889a48f2223f13ba05e092ed8aee10ab356bb6e1bc6a50ac2a71\": container with ID starting with 2738216c87f4889a48f2223f13ba05e092ed8aee10ab356bb6e1bc6a50ac2a71 not found: ID does not exist" Jan 29 18:10:30 crc kubenswrapper[4886]: I0129 18:10:30.652448 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd01fd0d-8339-41ba-be01-6c3b723b2ec9" path="/var/lib/kubelet/pods/fd01fd0d-8339-41ba-be01-6c3b723b2ec9/volumes" Jan 29 18:11:29 crc kubenswrapper[4886]: I0129 18:11:29.660620 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 18:11:29 crc kubenswrapper[4886]: I0129 18:11:29.661171 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" 
podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 18:11:38 crc kubenswrapper[4886]: I0129 18:11:38.244307 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xm9wv"] Jan 29 18:11:38 crc kubenswrapper[4886]: E0129 18:11:38.249562 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf73c735-d3aa-476b-9390-6a150d51a290" containerName="registry-server" Jan 29 18:11:38 crc kubenswrapper[4886]: I0129 18:11:38.249586 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf73c735-d3aa-476b-9390-6a150d51a290" containerName="registry-server" Jan 29 18:11:38 crc kubenswrapper[4886]: E0129 18:11:38.249612 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd01fd0d-8339-41ba-be01-6c3b723b2ec9" containerName="copy" Jan 29 18:11:38 crc kubenswrapper[4886]: I0129 18:11:38.249620 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd01fd0d-8339-41ba-be01-6c3b723b2ec9" containerName="copy" Jan 29 18:11:38 crc kubenswrapper[4886]: E0129 18:11:38.249642 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf73c735-d3aa-476b-9390-6a150d51a290" containerName="extract-utilities" Jan 29 18:11:38 crc kubenswrapper[4886]: I0129 18:11:38.249651 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf73c735-d3aa-476b-9390-6a150d51a290" containerName="extract-utilities" Jan 29 18:11:38 crc kubenswrapper[4886]: E0129 18:11:38.249669 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace6b3f5-2f50-4320-87db-40229f5f2cfa" containerName="registry-server" Jan 29 18:11:38 crc kubenswrapper[4886]: I0129 18:11:38.249678 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace6b3f5-2f50-4320-87db-40229f5f2cfa" containerName="registry-server" Jan 29 18:11:38 crc kubenswrapper[4886]: E0129 18:11:38.249696 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf73c735-d3aa-476b-9390-6a150d51a290" containerName="extract-content" Jan 29 18:11:38 crc kubenswrapper[4886]: I0129 18:11:38.249704 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf73c735-d3aa-476b-9390-6a150d51a290" containerName="extract-content" Jan 29 18:11:38 crc kubenswrapper[4886]: E0129 18:11:38.249715 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd01fd0d-8339-41ba-be01-6c3b723b2ec9" containerName="gather" Jan 29 18:11:38 crc kubenswrapper[4886]: I0129 18:11:38.249722 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd01fd0d-8339-41ba-be01-6c3b723b2ec9" containerName="gather" Jan 29 18:11:38 crc kubenswrapper[4886]: E0129 18:11:38.249738 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace6b3f5-2f50-4320-87db-40229f5f2cfa" containerName="extract-utilities" Jan 29 18:11:38 crc kubenswrapper[4886]: I0129 18:11:38.249747 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace6b3f5-2f50-4320-87db-40229f5f2cfa" containerName="extract-utilities" Jan 29 18:11:38 crc kubenswrapper[4886]: E0129 18:11:38.249771 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace6b3f5-2f50-4320-87db-40229f5f2cfa" containerName="extract-content" Jan 29 18:11:38 crc kubenswrapper[4886]: I0129 18:11:38.249780 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace6b3f5-2f50-4320-87db-40229f5f2cfa" containerName="extract-content" Jan 29 18:11:38 crc 
kubenswrapper[4886]: I0129 18:11:38.250665 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd01fd0d-8339-41ba-be01-6c3b723b2ec9" containerName="gather" Jan 29 18:11:38 crc kubenswrapper[4886]: I0129 18:11:38.250695 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="ace6b3f5-2f50-4320-87db-40229f5f2cfa" containerName="registry-server" Jan 29 18:11:38 crc kubenswrapper[4886]: I0129 18:11:38.250731 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd01fd0d-8339-41ba-be01-6c3b723b2ec9" containerName="copy" Jan 29 18:11:38 crc kubenswrapper[4886]: I0129 18:11:38.250745 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf73c735-d3aa-476b-9390-6a150d51a290" containerName="registry-server" Jan 29 18:11:38 crc kubenswrapper[4886]: I0129 18:11:38.254436 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xm9wv" Jan 29 18:11:38 crc kubenswrapper[4886]: I0129 18:11:38.263721 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xm9wv"] Jan 29 18:11:38 crc kubenswrapper[4886]: I0129 18:11:38.354156 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a40aab-9df6-46b7-ae77-30f27474304d-utilities\") pod \"community-operators-xm9wv\" (UID: \"e1a40aab-9df6-46b7-ae77-30f27474304d\") " pod="openshift-marketplace/community-operators-xm9wv" Jan 29 18:11:38 crc kubenswrapper[4886]: I0129 18:11:38.354417 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a40aab-9df6-46b7-ae77-30f27474304d-catalog-content\") pod \"community-operators-xm9wv\" (UID: \"e1a40aab-9df6-46b7-ae77-30f27474304d\") " pod="openshift-marketplace/community-operators-xm9wv" Jan 29 18:11:38 crc kubenswrapper[4886]: I0129 18:11:38.354463 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qctt9\" (UniqueName: \"kubernetes.io/projected/e1a40aab-9df6-46b7-ae77-30f27474304d-kube-api-access-qctt9\") pod \"community-operators-xm9wv\" (UID: \"e1a40aab-9df6-46b7-ae77-30f27474304d\") " pod="openshift-marketplace/community-operators-xm9wv" Jan 29 18:11:38 crc kubenswrapper[4886]: I0129 18:11:38.456784 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a40aab-9df6-46b7-ae77-30f27474304d-catalog-content\") pod \"community-operators-xm9wv\" (UID: \"e1a40aab-9df6-46b7-ae77-30f27474304d\") " pod="openshift-marketplace/community-operators-xm9wv" Jan 29 18:11:38 crc kubenswrapper[4886]: I0129 18:11:38.456863 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qctt9\" (UniqueName: \"kubernetes.io/projected/e1a40aab-9df6-46b7-ae77-30f27474304d-kube-api-access-qctt9\") pod \"community-operators-xm9wv\" (UID: \"e1a40aab-9df6-46b7-ae77-30f27474304d\") " pod="openshift-marketplace/community-operators-xm9wv" Jan 29 18:11:38 crc kubenswrapper[4886]: I0129 18:11:38.456982 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a40aab-9df6-46b7-ae77-30f27474304d-utilities\") pod \"community-operators-xm9wv\" (UID: \"e1a40aab-9df6-46b7-ae77-30f27474304d\") " 
pod="openshift-marketplace/community-operators-xm9wv" Jan 29 18:11:38 crc kubenswrapper[4886]: I0129 18:11:38.457414 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a40aab-9df6-46b7-ae77-30f27474304d-utilities\") pod \"community-operators-xm9wv\" (UID: \"e1a40aab-9df6-46b7-ae77-30f27474304d\") " pod="openshift-marketplace/community-operators-xm9wv" Jan 29 18:11:38 crc kubenswrapper[4886]: I0129 18:11:38.457560 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a40aab-9df6-46b7-ae77-30f27474304d-catalog-content\") pod \"community-operators-xm9wv\" (UID: \"e1a40aab-9df6-46b7-ae77-30f27474304d\") " pod="openshift-marketplace/community-operators-xm9wv" Jan 29 18:11:38 crc kubenswrapper[4886]: I0129 18:11:38.493587 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qctt9\" (UniqueName: \"kubernetes.io/projected/e1a40aab-9df6-46b7-ae77-30f27474304d-kube-api-access-qctt9\") pod \"community-operators-xm9wv\" (UID: \"e1a40aab-9df6-46b7-ae77-30f27474304d\") " pod="openshift-marketplace/community-operators-xm9wv" Jan 29 18:11:38 crc kubenswrapper[4886]: I0129 18:11:38.591732 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xm9wv" Jan 29 18:11:39 crc kubenswrapper[4886]: I0129 18:11:39.189876 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xm9wv"] Jan 29 18:11:40 crc kubenswrapper[4886]: I0129 18:11:40.054889 4886 generic.go:334] "Generic (PLEG): container finished" podID="e1a40aab-9df6-46b7-ae77-30f27474304d" containerID="73a76d9bf9407207bb16286ea217fd9e932d96ee9e61b5e551d230717409c7fd" exitCode=0 Jan 29 18:11:40 crc kubenswrapper[4886]: I0129 18:11:40.055006 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xm9wv" event={"ID":"e1a40aab-9df6-46b7-ae77-30f27474304d","Type":"ContainerDied","Data":"73a76d9bf9407207bb16286ea217fd9e932d96ee9e61b5e551d230717409c7fd"} Jan 29 18:11:40 crc kubenswrapper[4886]: I0129 18:11:40.055278 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xm9wv" event={"ID":"e1a40aab-9df6-46b7-ae77-30f27474304d","Type":"ContainerStarted","Data":"95030769d112ca044ae55f690f43571603a8addbf7e4c6fa08a67c0409685a38"} Jan 29 18:11:40 crc kubenswrapper[4886]: I0129 18:11:40.058557 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 18:11:40 crc kubenswrapper[4886]: E0129 18:11:40.222666 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 18:11:40 crc kubenswrapper[4886]: E0129 18:11:40.222913 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qctt9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-xm9wv_openshift-marketplace(e1a40aab-9df6-46b7-ae77-30f27474304d): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 18:11:40 crc kubenswrapper[4886]: E0129 18:11:40.224227 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-xm9wv" podUID="e1a40aab-9df6-46b7-ae77-30f27474304d" Jan 29 18:11:41 crc kubenswrapper[4886]: E0129 18:11:41.070347 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-xm9wv" podUID="e1a40aab-9df6-46b7-ae77-30f27474304d"
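
The pull is failing before a single layer is transferred: registry.redhat.io uses the standard Docker registry v2 token flow, so a client first gets a 401 from /v2/ with a WWW-Authenticate challenge naming a token service, then asks that service for a pull-scoped bearer token. The 403 in these entries is the token service refusing that second request, which on OpenShift typically means the cluster's pull secret for registry.redhat.io is missing, stale, or unentitled. The same handshake can be reproduced outside the kubelet; a minimal diagnostic sketch, assuming only the standard flow and the repository named in the log (Python standard library; the header parsing is deliberately simplified):

    import re
    import urllib.error
    import urllib.parse
    import urllib.request

    REPO = "redhat/community-operator-index"  # repository from the failing pull above

    # Step 1: an anonymous GET /v2/ is answered with 401 plus a WWW-Authenticate
    # header that names the token service (realm) and the registry service.
    challenge = ""
    try:
        urllib.request.urlopen("https://registry.redhat.io/v2/")
    except urllib.error.HTTPError as err:
        challenge = err.headers.get("WWW-Authenticate", "")
    params = dict(re.findall(r'(\w+)="([^"]*)"', challenge))

    # Step 2: request a pull-scoped bearer token. Without valid credentials this
    # is the request that comes back 403, matching "Requesting bearer token:
    # invalid status code from registry 403" in the log.
    token_url = (f"{params['realm']}?service={urllib.parse.quote(params['service'])}"
                 f"&scope=repository:{REPO}:pull")
    try:
        urllib.request.urlopen(token_url)
        print("token issued; registry credentials look valid")
    except urllib.error.HTTPError as err:
        print("token request rejected:", err.code)
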
Jan 29 18:11:52 crc kubenswrapper[4886]: E0129 18:11:52.761776 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 18:11:52 crc kubenswrapper[4886]: E0129 18:11:52.762248 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qctt9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-xm9wv_openshift-marketplace(e1a40aab-9df6-46b7-ae77-30f27474304d): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 18:11:52 crc kubenswrapper[4886]: E0129 18:11:52.763415 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-xm9wv" podUID="e1a40aab-9df6-46b7-ae77-30f27474304d" Jan 29 18:11:59 crc kubenswrapper[4886]: I0129 18:11:59.661594 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 18:11:59 crc kubenswrapper[4886]: I0129 18:11:59.662299 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
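
These recurring failures are the kubelet's HTTP liveness probe for machine-config-daemon: each period it GETs http://127.0.0.1:8798/health, counts any status outside 200-399 (or, as here, a refused connection) as a failure, and once enough consecutive failures accumulate it kills and restarts the container, which is exactly what happens at 18:12:29 below. A minimal sketch of the check's semantics (the timeout is an illustrative stand-in for whatever the pod spec sets, not a value taken from this cluster):

    import urllib.error
    import urllib.request

    def http_liveness_probe(url: str, timeout: float = 1.0) -> bool:
        # Kubelet HTTP-probe semantics: any 2xx/3xx response is a success;
        # anything else, including "connection refused", is a failure.
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return 200 <= resp.status < 400
        except (urllib.error.URLError, OSError):
            return False

    print(http_liveness_probe("http://127.0.0.1:8798/health"))
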
image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 18:12:18 crc kubenswrapper[4886]: E0129 18:12:18.770779 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qctt9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-xm9wv_openshift-marketplace(e1a40aab-9df6-46b7-ae77-30f27474304d): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 18:12:18 crc kubenswrapper[4886]: E0129 18:12:18.772081 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-xm9wv" podUID="e1a40aab-9df6-46b7-ae77-30f27474304d" Jan 29 18:12:29 crc kubenswrapper[4886]: I0129 18:12:29.661088 4886 patch_prober.go:28] interesting pod/machine-config-daemon-gx4vp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 18:12:29 crc kubenswrapper[4886]: I0129 18:12:29.661726 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 18:12:29 crc kubenswrapper[4886]: I0129 18:12:29.661785 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" Jan 29 18:12:29 crc kubenswrapper[4886]: I0129 18:12:29.662732 4886 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b900b9c884451219b68e72739d460e4d06900b18f10f7003c7040961c812bb7b"} pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 18:12:29 crc kubenswrapper[4886]: I0129 18:12:29.662835 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerName="machine-config-daemon" containerID="cri-o://b900b9c884451219b68e72739d460e4d06900b18f10f7003c7040961c812bb7b" gracePeriod=600 Jan 29 18:12:29 crc kubenswrapper[4886]: E0129 18:12:29.787838 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:12:30 crc kubenswrapper[4886]: I0129 18:12:30.704278 4886 generic.go:334] "Generic (PLEG): container finished" podID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" containerID="b900b9c884451219b68e72739d460e4d06900b18f10f7003c7040961c812bb7b" exitCode=0 Jan 29 18:12:30 crc kubenswrapper[4886]: I0129 18:12:30.704355 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" event={"ID":"5a5d8fc0-7aa5-431a-9add-9bdcc6d20091","Type":"ContainerDied","Data":"b900b9c884451219b68e72739d460e4d06900b18f10f7003c7040961c812bb7b"} Jan 29 18:12:30 crc kubenswrapper[4886]: I0129 18:12:30.704647 4886 scope.go:117] "RemoveContainer" containerID="a8607a4ceafc19dc29f39e1c49905b447674d1829f5c41ef929e075c395f9df6" Jan 29 18:12:30 crc kubenswrapper[4886]: I0129 18:12:30.705944 4886 scope.go:117] "RemoveContainer" containerID="b900b9c884451219b68e72739d460e4d06900b18f10f7003c7040961c812bb7b" Jan 29 18:12:30 crc kubenswrapper[4886]: E0129 18:12:30.706654 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:12:32 crc kubenswrapper[4886]: E0129 18:12:32.618213 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-xm9wv" podUID="e1a40aab-9df6-46b7-ae77-30f27474304d" Jan 29 18:12:41 crc kubenswrapper[4886]: I0129 18:12:41.616303 4886 scope.go:117] "RemoveContainer" containerID="b900b9c884451219b68e72739d460e4d06900b18f10f7003c7040961c812bb7b" Jan 29 18:12:41 crc kubenswrapper[4886]: E0129 18:12:41.617660 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:12:43 crc kubenswrapper[4886]: E0129 18:12:43.619106 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-xm9wv" podUID="e1a40aab-9df6-46b7-ae77-30f27474304d" Jan 29 18:12:55 crc kubenswrapper[4886]: E0129 18:12:55.620004 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-xm9wv" podUID="e1a40aab-9df6-46b7-ae77-30f27474304d" Jan 29 18:12:56 crc kubenswrapper[4886]: I0129 18:12:56.616629 4886 scope.go:117] "RemoveContainer" containerID="b900b9c884451219b68e72739d460e4d06900b18f10f7003c7040961c812bb7b" Jan 29 18:12:56 crc kubenswrapper[4886]: E0129 18:12:56.617870 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:13:08 crc kubenswrapper[4886]: I0129 18:13:08.630575 4886 scope.go:117] "RemoveContainer" containerID="b900b9c884451219b68e72739d460e4d06900b18f10f7003c7040961c812bb7b" Jan 29 18:13:08 crc kubenswrapper[4886]: E0129 18:13:08.631699 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:13:08 crc kubenswrapper[4886]: E0129 18:13:08.766448 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 18:13:08 crc kubenswrapper[4886]: E0129 18:13:08.766733 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qctt9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-xm9wv_openshift-marketplace(e1a40aab-9df6-46b7-ae77-30f27474304d): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 18:13:08 crc kubenswrapper[4886]: E0129 18:13:08.768096 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-xm9wv" podUID="e1a40aab-9df6-46b7-ae77-30f27474304d" Jan 29 18:13:19 crc kubenswrapper[4886]: I0129 18:13:19.615667 4886 scope.go:117] "RemoveContainer" containerID="b900b9c884451219b68e72739d460e4d06900b18f10f7003c7040961c812bb7b" Jan 29 18:13:19 crc kubenswrapper[4886]: E0129 18:13:19.616692 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:13:19 crc kubenswrapper[4886]: E0129 18:13:19.617791 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-xm9wv" podUID="e1a40aab-9df6-46b7-ae77-30f27474304d" Jan 29 18:13:34 crc kubenswrapper[4886]: I0129 18:13:34.615292 4886 scope.go:117] "RemoveContainer" containerID="b900b9c884451219b68e72739d460e4d06900b18f10f7003c7040961c812bb7b" Jan 29 18:13:34 crc kubenswrapper[4886]: E0129 18:13:34.616642 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:13:34 crc kubenswrapper[4886]: E0129 18:13:34.618374 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-xm9wv" podUID="e1a40aab-9df6-46b7-ae77-30f27474304d" Jan 29 18:13:45 crc kubenswrapper[4886]: I0129 18:13:45.615977 4886 scope.go:117] "RemoveContainer" containerID="b900b9c884451219b68e72739d460e4d06900b18f10f7003c7040961c812bb7b" Jan 29 18:13:45 crc kubenswrapper[4886]: E0129 18:13:45.619749 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:13:46 crc kubenswrapper[4886]: E0129 18:13:46.617227 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-xm9wv" podUID="e1a40aab-9df6-46b7-ae77-30f27474304d" Jan 29 18:13:57 crc kubenswrapper[4886]: I0129 18:13:57.615739 4886 scope.go:117] "RemoveContainer" containerID="b900b9c884451219b68e72739d460e4d06900b18f10f7003c7040961c812bb7b" Jan 29 18:13:57 crc kubenswrapper[4886]: E0129 18:13:57.616539 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:14:01 crc kubenswrapper[4886]: E0129 18:14:01.618777 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-xm9wv" podUID="e1a40aab-9df6-46b7-ae77-30f27474304d" Jan 29 18:14:10 crc kubenswrapper[4886]: I0129 18:14:10.618717 4886 scope.go:117] "RemoveContainer" containerID="b900b9c884451219b68e72739d460e4d06900b18f10f7003c7040961c812bb7b" Jan 29 18:14:10 crc kubenswrapper[4886]: E0129 18:14:10.619794 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091"
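
Two back-off loops are interleaved through this stretch: the image pull for extract-content (actual pull attempts at 18:11:40, 18:11:52, 18:12:18, and 18:13:08) and the crash-looping machine-config-daemon, which has already hit the "back-off 5m0s" ceiling. Both follow the kubelet's exponential back-off, which by default starts at 10s, doubles per failure, and caps at 5m; the many intermediate "Error syncing pod, skipping" entries are the sync loop re-reporting that the back-off is still in force, not new attempts. A few lines to generate the expected schedule, assuming the default 10s base, 2x factor, and 5m cap:

    # Kubelet-style exponential back-off: 10s base, doubling, capped at 300s.
    BASE, FACTOR, CAP = 10.0, 2.0, 300.0

    def retry_offsets(attempts: int) -> list[float]:
        delay, elapsed, offsets = BASE, 0.0, []
        for _ in range(attempts):
            elapsed += delay
            offsets.append(elapsed)
            delay = min(delay * FACTOR, CAP)
        return offsets

    # Seconds after the first failure at which retries are expected:
    print(retry_offsets(6))  # [10.0, 30.0, 70.0, 150.0, 310.0, 610.0]

The observed pull retries sit at roughly +12s, +38s, and +88s after the first attempt, which matches the 10s/30s/70s series plus sync-loop scheduling latency.
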
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-xm9wv" podUID="e1a40aab-9df6-46b7-ae77-30f27474304d" Jan 29 18:14:22 crc kubenswrapper[4886]: I0129 18:14:22.397484 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n562p"] Jan 29 18:14:22 crc kubenswrapper[4886]: I0129 18:14:22.402419 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n562p" Jan 29 18:14:22 crc kubenswrapper[4886]: I0129 18:14:22.412580 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n562p"] Jan 29 18:14:22 crc kubenswrapper[4886]: I0129 18:14:22.532718 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gngjw\" (UniqueName: \"kubernetes.io/projected/cab12cac-196d-4567-b193-dbfe7e5dceac-kube-api-access-gngjw\") pod \"redhat-marketplace-n562p\" (UID: \"cab12cac-196d-4567-b193-dbfe7e5dceac\") " pod="openshift-marketplace/redhat-marketplace-n562p" Jan 29 18:14:22 crc kubenswrapper[4886]: I0129 18:14:22.533466 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cab12cac-196d-4567-b193-dbfe7e5dceac-catalog-content\") pod \"redhat-marketplace-n562p\" (UID: \"cab12cac-196d-4567-b193-dbfe7e5dceac\") " pod="openshift-marketplace/redhat-marketplace-n562p" Jan 29 18:14:22 crc kubenswrapper[4886]: I0129 18:14:22.533621 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cab12cac-196d-4567-b193-dbfe7e5dceac-utilities\") pod \"redhat-marketplace-n562p\" (UID: \"cab12cac-196d-4567-b193-dbfe7e5dceac\") " pod="openshift-marketplace/redhat-marketplace-n562p" Jan 29 18:14:22 crc kubenswrapper[4886]: I0129 18:14:22.649006 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gngjw\" (UniqueName: \"kubernetes.io/projected/cab12cac-196d-4567-b193-dbfe7e5dceac-kube-api-access-gngjw\") pod \"redhat-marketplace-n562p\" (UID: \"cab12cac-196d-4567-b193-dbfe7e5dceac\") " pod="openshift-marketplace/redhat-marketplace-n562p" Jan 29 18:14:22 crc kubenswrapper[4886]: I0129 18:14:22.649273 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cab12cac-196d-4567-b193-dbfe7e5dceac-catalog-content\") pod \"redhat-marketplace-n562p\" (UID: \"cab12cac-196d-4567-b193-dbfe7e5dceac\") " pod="openshift-marketplace/redhat-marketplace-n562p" Jan 29 18:14:22 crc kubenswrapper[4886]: I0129 18:14:22.649296 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cab12cac-196d-4567-b193-dbfe7e5dceac-utilities\") pod \"redhat-marketplace-n562p\" (UID: \"cab12cac-196d-4567-b193-dbfe7e5dceac\") " pod="openshift-marketplace/redhat-marketplace-n562p" Jan 29 18:14:22 crc kubenswrapper[4886]: I0129 18:14:22.651372 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cab12cac-196d-4567-b193-dbfe7e5dceac-utilities\") pod \"redhat-marketplace-n562p\" (UID: 
\"cab12cac-196d-4567-b193-dbfe7e5dceac\") " pod="openshift-marketplace/redhat-marketplace-n562p" Jan 29 18:14:22 crc kubenswrapper[4886]: I0129 18:14:22.652018 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cab12cac-196d-4567-b193-dbfe7e5dceac-catalog-content\") pod \"redhat-marketplace-n562p\" (UID: \"cab12cac-196d-4567-b193-dbfe7e5dceac\") " pod="openshift-marketplace/redhat-marketplace-n562p" Jan 29 18:14:22 crc kubenswrapper[4886]: I0129 18:14:22.674136 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gngjw\" (UniqueName: \"kubernetes.io/projected/cab12cac-196d-4567-b193-dbfe7e5dceac-kube-api-access-gngjw\") pod \"redhat-marketplace-n562p\" (UID: \"cab12cac-196d-4567-b193-dbfe7e5dceac\") " pod="openshift-marketplace/redhat-marketplace-n562p" Jan 29 18:14:22 crc kubenswrapper[4886]: I0129 18:14:22.741059 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n562p" Jan 29 18:14:23 crc kubenswrapper[4886]: I0129 18:14:23.271769 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n562p"] Jan 29 18:14:23 crc kubenswrapper[4886]: I0129 18:14:23.615056 4886 scope.go:117] "RemoveContainer" containerID="b900b9c884451219b68e72739d460e4d06900b18f10f7003c7040961c812bb7b" Jan 29 18:14:23 crc kubenswrapper[4886]: E0129 18:14:23.615420 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:14:24 crc kubenswrapper[4886]: I0129 18:14:24.248936 4886 generic.go:334] "Generic (PLEG): container finished" podID="cab12cac-196d-4567-b193-dbfe7e5dceac" containerID="293d46bccbff7998923fdc0bd1e4d2e70801401dcf85ea823ef008fcadc6dea0" exitCode=0 Jan 29 18:14:24 crc kubenswrapper[4886]: I0129 18:14:24.249025 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n562p" event={"ID":"cab12cac-196d-4567-b193-dbfe7e5dceac","Type":"ContainerDied","Data":"293d46bccbff7998923fdc0bd1e4d2e70801401dcf85ea823ef008fcadc6dea0"} Jan 29 18:14:24 crc kubenswrapper[4886]: I0129 18:14:24.249283 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n562p" event={"ID":"cab12cac-196d-4567-b193-dbfe7e5dceac","Type":"ContainerStarted","Data":"6bcaff8425100bf86063574e42241279b47ae8277b9085c7d68e2ea73533940c"} Jan 29 18:14:26 crc kubenswrapper[4886]: I0129 18:14:26.269019 4886 generic.go:334] "Generic (PLEG): container finished" podID="cab12cac-196d-4567-b193-dbfe7e5dceac" containerID="46e4ac4967e8890f9ea1b048f9f0023744ebfb614ed27aad57d597bc3d686c99" exitCode=0 Jan 29 18:14:26 crc kubenswrapper[4886]: I0129 18:14:26.269085 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n562p" event={"ID":"cab12cac-196d-4567-b193-dbfe7e5dceac","Type":"ContainerDied","Data":"46e4ac4967e8890f9ea1b048f9f0023744ebfb614ed27aad57d597bc3d686c99"} Jan 29 18:14:27 crc kubenswrapper[4886]: I0129 18:14:27.283645 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-n562p" event={"ID":"cab12cac-196d-4567-b193-dbfe7e5dceac","Type":"ContainerStarted","Data":"b6e921c2222268cb755607604fd8c0b3efbdfa3b43c1c202e82106f447a7a6b2"} Jan 29 18:14:27 crc kubenswrapper[4886]: I0129 18:14:27.301030 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n562p" podStartSLOduration=2.856990968 podStartE2EDuration="5.301011937s" podCreationTimestamp="2026-01-29 18:14:22 +0000 UTC" firstStartedPulling="2026-01-29 18:14:24.251237319 +0000 UTC m=+6747.159956611" lastFinishedPulling="2026-01-29 18:14:26.695258268 +0000 UTC m=+6749.603977580" observedRunningTime="2026-01-29 18:14:27.300026349 +0000 UTC m=+6750.208745631" watchObservedRunningTime="2026-01-29 18:14:27.301011937 +0000 UTC m=+6750.209731209" Jan 29 18:14:31 crc kubenswrapper[4886]: I0129 18:14:31.336459 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xm9wv" event={"ID":"e1a40aab-9df6-46b7-ae77-30f27474304d","Type":"ContainerStarted","Data":"15dfacf562334e503bf98f0b143227cfe8c6890ca73ff759622e84fbb0b7592f"} Jan 29 18:14:32 crc kubenswrapper[4886]: I0129 18:14:32.354706 4886 generic.go:334] "Generic (PLEG): container finished" podID="e1a40aab-9df6-46b7-ae77-30f27474304d" containerID="15dfacf562334e503bf98f0b143227cfe8c6890ca73ff759622e84fbb0b7592f" exitCode=0 Jan 29 18:14:32 crc kubenswrapper[4886]: I0129 18:14:32.354771 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xm9wv" event={"ID":"e1a40aab-9df6-46b7-ae77-30f27474304d","Type":"ContainerDied","Data":"15dfacf562334e503bf98f0b143227cfe8c6890ca73ff759622e84fbb0b7592f"} Jan 29 18:14:32 crc kubenswrapper[4886]: I0129 18:14:32.741834 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n562p" Jan 29 18:14:32 crc kubenswrapper[4886]: I0129 18:14:32.741991 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n562p" Jan 29 18:14:32 crc kubenswrapper[4886]: I0129 18:14:32.803773 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n562p" Jan 29 18:14:33 crc kubenswrapper[4886]: I0129 18:14:33.366049 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xm9wv" event={"ID":"e1a40aab-9df6-46b7-ae77-30f27474304d","Type":"ContainerStarted","Data":"66a49c321d01b60ab3f2c9f17e95bf959950c32860ff7b7351a0c02780afd5b3"} Jan 29 18:14:33 crc kubenswrapper[4886]: I0129 18:14:33.393174 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xm9wv" podStartSLOduration=2.668651144 podStartE2EDuration="2m55.393156634s" podCreationTimestamp="2026-01-29 18:11:38 +0000 UTC" firstStartedPulling="2026-01-29 18:11:40.057970736 +0000 UTC m=+6582.966690038" lastFinishedPulling="2026-01-29 18:14:32.782476256 +0000 UTC m=+6755.691195528" observedRunningTime="2026-01-29 18:14:33.386404753 +0000 UTC m=+6756.295124055" watchObservedRunningTime="2026-01-29 18:14:33.393156634 +0000 UTC m=+6756.301875906" Jan 29 18:14:33 crc kubenswrapper[4886]: I0129 18:14:33.421475 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n562p" Jan 29 18:14:34 crc kubenswrapper[4886]: I0129 18:14:34.610390 4886 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n562p"] Jan 29 18:14:35 crc kubenswrapper[4886]: I0129 18:14:35.389642 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n562p" podUID="cab12cac-196d-4567-b193-dbfe7e5dceac" containerName="registry-server" containerID="cri-o://b6e921c2222268cb755607604fd8c0b3efbdfa3b43c1c202e82106f447a7a6b2" gracePeriod=2 Jan 29 18:14:35 crc kubenswrapper[4886]: I0129 18:14:35.980707 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n562p" Jan 29 18:14:36 crc kubenswrapper[4886]: I0129 18:14:36.116947 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gngjw\" (UniqueName: \"kubernetes.io/projected/cab12cac-196d-4567-b193-dbfe7e5dceac-kube-api-access-gngjw\") pod \"cab12cac-196d-4567-b193-dbfe7e5dceac\" (UID: \"cab12cac-196d-4567-b193-dbfe7e5dceac\") " Jan 29 18:14:36 crc kubenswrapper[4886]: I0129 18:14:36.117047 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cab12cac-196d-4567-b193-dbfe7e5dceac-utilities\") pod \"cab12cac-196d-4567-b193-dbfe7e5dceac\" (UID: \"cab12cac-196d-4567-b193-dbfe7e5dceac\") " Jan 29 18:14:36 crc kubenswrapper[4886]: I0129 18:14:36.117407 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cab12cac-196d-4567-b193-dbfe7e5dceac-catalog-content\") pod \"cab12cac-196d-4567-b193-dbfe7e5dceac\" (UID: \"cab12cac-196d-4567-b193-dbfe7e5dceac\") " Jan 29 18:14:36 crc kubenswrapper[4886]: I0129 18:14:36.118898 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cab12cac-196d-4567-b193-dbfe7e5dceac-utilities" (OuterVolumeSpecName: "utilities") pod "cab12cac-196d-4567-b193-dbfe7e5dceac" (UID: "cab12cac-196d-4567-b193-dbfe7e5dceac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 18:14:36 crc kubenswrapper[4886]: I0129 18:14:36.122000 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cab12cac-196d-4567-b193-dbfe7e5dceac-kube-api-access-gngjw" (OuterVolumeSpecName: "kube-api-access-gngjw") pod "cab12cac-196d-4567-b193-dbfe7e5dceac" (UID: "cab12cac-196d-4567-b193-dbfe7e5dceac"). InnerVolumeSpecName "kube-api-access-gngjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 18:14:36 crc kubenswrapper[4886]: I0129 18:14:36.164500 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cab12cac-196d-4567-b193-dbfe7e5dceac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cab12cac-196d-4567-b193-dbfe7e5dceac" (UID: "cab12cac-196d-4567-b193-dbfe7e5dceac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 18:14:36 crc kubenswrapper[4886]: I0129 18:14:36.220955 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cab12cac-196d-4567-b193-dbfe7e5dceac-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 18:14:36 crc kubenswrapper[4886]: I0129 18:14:36.221009 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gngjw\" (UniqueName: \"kubernetes.io/projected/cab12cac-196d-4567-b193-dbfe7e5dceac-kube-api-access-gngjw\") on node \"crc\" DevicePath \"\"" Jan 29 18:14:36 crc kubenswrapper[4886]: I0129 18:14:36.221034 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cab12cac-196d-4567-b193-dbfe7e5dceac-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 18:14:36 crc kubenswrapper[4886]: I0129 18:14:36.403502 4886 generic.go:334] "Generic (PLEG): container finished" podID="cab12cac-196d-4567-b193-dbfe7e5dceac" containerID="b6e921c2222268cb755607604fd8c0b3efbdfa3b43c1c202e82106f447a7a6b2" exitCode=0 Jan 29 18:14:36 crc kubenswrapper[4886]: I0129 18:14:36.403573 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n562p" event={"ID":"cab12cac-196d-4567-b193-dbfe7e5dceac","Type":"ContainerDied","Data":"b6e921c2222268cb755607604fd8c0b3efbdfa3b43c1c202e82106f447a7a6b2"} Jan 29 18:14:36 crc kubenswrapper[4886]: I0129 18:14:36.403610 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n562p" Jan 29 18:14:36 crc kubenswrapper[4886]: I0129 18:14:36.403652 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n562p" event={"ID":"cab12cac-196d-4567-b193-dbfe7e5dceac","Type":"ContainerDied","Data":"6bcaff8425100bf86063574e42241279b47ae8277b9085c7d68e2ea73533940c"} Jan 29 18:14:36 crc kubenswrapper[4886]: I0129 18:14:36.403700 4886 scope.go:117] "RemoveContainer" containerID="b6e921c2222268cb755607604fd8c0b3efbdfa3b43c1c202e82106f447a7a6b2" Jan 29 18:14:36 crc kubenswrapper[4886]: I0129 18:14:36.438249 4886 scope.go:117] "RemoveContainer" containerID="46e4ac4967e8890f9ea1b048f9f0023744ebfb614ed27aad57d597bc3d686c99" Jan 29 18:14:36 crc kubenswrapper[4886]: I0129 18:14:36.466254 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n562p"] Jan 29 18:14:36 crc kubenswrapper[4886]: I0129 18:14:36.477139 4886 scope.go:117] "RemoveContainer" containerID="293d46bccbff7998923fdc0bd1e4d2e70801401dcf85ea823ef008fcadc6dea0" Jan 29 18:14:36 crc kubenswrapper[4886]: I0129 18:14:36.483426 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n562p"] Jan 29 18:14:36 crc kubenswrapper[4886]: I0129 18:14:36.543420 4886 scope.go:117] "RemoveContainer" containerID="b6e921c2222268cb755607604fd8c0b3efbdfa3b43c1c202e82106f447a7a6b2" Jan 29 18:14:36 crc kubenswrapper[4886]: E0129 18:14:36.543898 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6e921c2222268cb755607604fd8c0b3efbdfa3b43c1c202e82106f447a7a6b2\": container with ID starting with b6e921c2222268cb755607604fd8c0b3efbdfa3b43c1c202e82106f447a7a6b2 not found: ID does not exist" containerID="b6e921c2222268cb755607604fd8c0b3efbdfa3b43c1c202e82106f447a7a6b2" Jan 29 18:14:36 crc kubenswrapper[4886]: I0129 18:14:36.543957 4886 
Jan 29 18:14:36 crc kubenswrapper[4886]: I0129 18:14:36.543957 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6e921c2222268cb755607604fd8c0b3efbdfa3b43c1c202e82106f447a7a6b2"} err="failed to get container status \"b6e921c2222268cb755607604fd8c0b3efbdfa3b43c1c202e82106f447a7a6b2\": rpc error: code = NotFound desc = could not find container \"b6e921c2222268cb755607604fd8c0b3efbdfa3b43c1c202e82106f447a7a6b2\": container with ID starting with b6e921c2222268cb755607604fd8c0b3efbdfa3b43c1c202e82106f447a7a6b2 not found: ID does not exist" Jan 29 18:14:36 crc kubenswrapper[4886]: I0129 18:14:36.544004 4886 scope.go:117] "RemoveContainer" containerID="46e4ac4967e8890f9ea1b048f9f0023744ebfb614ed27aad57d597bc3d686c99" Jan 29 18:14:36 crc kubenswrapper[4886]: E0129 18:14:36.545254 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46e4ac4967e8890f9ea1b048f9f0023744ebfb614ed27aad57d597bc3d686c99\": container with ID starting with 46e4ac4967e8890f9ea1b048f9f0023744ebfb614ed27aad57d597bc3d686c99 not found: ID does not exist" containerID="46e4ac4967e8890f9ea1b048f9f0023744ebfb614ed27aad57d597bc3d686c99" Jan 29 18:14:36 crc kubenswrapper[4886]: I0129 18:14:36.545308 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46e4ac4967e8890f9ea1b048f9f0023744ebfb614ed27aad57d597bc3d686c99"} err="failed to get container status \"46e4ac4967e8890f9ea1b048f9f0023744ebfb614ed27aad57d597bc3d686c99\": rpc error: code = NotFound desc = could not find container \"46e4ac4967e8890f9ea1b048f9f0023744ebfb614ed27aad57d597bc3d686c99\": container with ID starting with 46e4ac4967e8890f9ea1b048f9f0023744ebfb614ed27aad57d597bc3d686c99 not found: ID does not exist" Jan 29 18:14:36 crc kubenswrapper[4886]: I0129 18:14:36.545366 4886 scope.go:117] "RemoveContainer" containerID="293d46bccbff7998923fdc0bd1e4d2e70801401dcf85ea823ef008fcadc6dea0" Jan 29 18:14:36 crc kubenswrapper[4886]: E0129 18:14:36.546113 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"293d46bccbff7998923fdc0bd1e4d2e70801401dcf85ea823ef008fcadc6dea0\": container with ID starting with 293d46bccbff7998923fdc0bd1e4d2e70801401dcf85ea823ef008fcadc6dea0 not found: ID does not exist" containerID="293d46bccbff7998923fdc0bd1e4d2e70801401dcf85ea823ef008fcadc6dea0" Jan 29 18:14:36 crc kubenswrapper[4886]: I0129 18:14:36.546163 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"293d46bccbff7998923fdc0bd1e4d2e70801401dcf85ea823ef008fcadc6dea0"} err="failed to get container status \"293d46bccbff7998923fdc0bd1e4d2e70801401dcf85ea823ef008fcadc6dea0\": rpc error: code = NotFound desc = could not find container \"293d46bccbff7998923fdc0bd1e4d2e70801401dcf85ea823ef008fcadc6dea0\": container with ID starting with 293d46bccbff7998923fdc0bd1e4d2e70801401dcf85ea823ef008fcadc6dea0 not found: ID does not exist" Jan 29 18:14:36 crc kubenswrapper[4886]: I0129 18:14:36.638290 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cab12cac-196d-4567-b193-dbfe7e5dceac" path="/var/lib/kubelet/pods/cab12cac-196d-4567-b193-dbfe7e5dceac/volumes" Jan 29 18:14:37 crc kubenswrapper[4886]: I0129 18:14:37.616568 4886 scope.go:117] "RemoveContainer" containerID="b900b9c884451219b68e72739d460e4d06900b18f10f7003c7040961c812bb7b" Jan 29 18:14:37 crc kubenswrapper[4886]: E0129 18:14:37.617166 4886 pod_workers.go:1301] "Error syncing pod, skipping"
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:14:38 crc kubenswrapper[4886]: I0129 18:14:38.591921 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xm9wv" Jan 29 18:14:38 crc kubenswrapper[4886]: I0129 18:14:38.591991 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xm9wv" Jan 29 18:14:38 crc kubenswrapper[4886]: I0129 18:14:38.675917 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xm9wv" Jan 29 18:14:39 crc kubenswrapper[4886]: I0129 18:14:39.553453 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xm9wv" Jan 29 18:14:40 crc kubenswrapper[4886]: I0129 18:14:40.012647 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xm9wv"] Jan 29 18:14:41 crc kubenswrapper[4886]: I0129 18:14:41.487691 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xm9wv" podUID="e1a40aab-9df6-46b7-ae77-30f27474304d" containerName="registry-server" containerID="cri-o://66a49c321d01b60ab3f2c9f17e95bf959950c32860ff7b7351a0c02780afd5b3" gracePeriod=2 Jan 29 18:14:42 crc kubenswrapper[4886]: I0129 18:14:42.111949 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xm9wv" Jan 29 18:14:42 crc kubenswrapper[4886]: I0129 18:14:42.177303 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a40aab-9df6-46b7-ae77-30f27474304d-catalog-content\") pod \"e1a40aab-9df6-46b7-ae77-30f27474304d\" (UID: \"e1a40aab-9df6-46b7-ae77-30f27474304d\") " Jan 29 18:14:42 crc kubenswrapper[4886]: I0129 18:14:42.177639 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a40aab-9df6-46b7-ae77-30f27474304d-utilities\") pod \"e1a40aab-9df6-46b7-ae77-30f27474304d\" (UID: \"e1a40aab-9df6-46b7-ae77-30f27474304d\") " Jan 29 18:14:42 crc kubenswrapper[4886]: I0129 18:14:42.177885 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qctt9\" (UniqueName: \"kubernetes.io/projected/e1a40aab-9df6-46b7-ae77-30f27474304d-kube-api-access-qctt9\") pod \"e1a40aab-9df6-46b7-ae77-30f27474304d\" (UID: \"e1a40aab-9df6-46b7-ae77-30f27474304d\") " Jan 29 18:14:42 crc kubenswrapper[4886]: I0129 18:14:42.179025 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1a40aab-9df6-46b7-ae77-30f27474304d-utilities" (OuterVolumeSpecName: "utilities") pod "e1a40aab-9df6-46b7-ae77-30f27474304d" (UID: "e1a40aab-9df6-46b7-ae77-30f27474304d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 18:14:42 crc kubenswrapper[4886]: I0129 18:14:42.188291 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1a40aab-9df6-46b7-ae77-30f27474304d-kube-api-access-qctt9" (OuterVolumeSpecName: "kube-api-access-qctt9") pod "e1a40aab-9df6-46b7-ae77-30f27474304d" (UID: "e1a40aab-9df6-46b7-ae77-30f27474304d"). InnerVolumeSpecName "kube-api-access-qctt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 18:14:42 crc kubenswrapper[4886]: I0129 18:14:42.246689 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1a40aab-9df6-46b7-ae77-30f27474304d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1a40aab-9df6-46b7-ae77-30f27474304d" (UID: "e1a40aab-9df6-46b7-ae77-30f27474304d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 18:14:42 crc kubenswrapper[4886]: I0129 18:14:42.281348 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a40aab-9df6-46b7-ae77-30f27474304d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 18:14:42 crc kubenswrapper[4886]: I0129 18:14:42.281383 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a40aab-9df6-46b7-ae77-30f27474304d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 18:14:42 crc kubenswrapper[4886]: I0129 18:14:42.281393 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qctt9\" (UniqueName: \"kubernetes.io/projected/e1a40aab-9df6-46b7-ae77-30f27474304d-kube-api-access-qctt9\") on node \"crc\" DevicePath \"\"" Jan 29 18:14:42 crc kubenswrapper[4886]: I0129 18:14:42.498681 4886 generic.go:334] "Generic (PLEG): container finished" podID="e1a40aab-9df6-46b7-ae77-30f27474304d" containerID="66a49c321d01b60ab3f2c9f17e95bf959950c32860ff7b7351a0c02780afd5b3" exitCode=0 Jan 29 18:14:42 crc kubenswrapper[4886]: I0129 18:14:42.498722 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xm9wv" event={"ID":"e1a40aab-9df6-46b7-ae77-30f27474304d","Type":"ContainerDied","Data":"66a49c321d01b60ab3f2c9f17e95bf959950c32860ff7b7351a0c02780afd5b3"} Jan 29 18:14:42 crc kubenswrapper[4886]: I0129 18:14:42.498733 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xm9wv" Jan 29 18:14:42 crc kubenswrapper[4886]: I0129 18:14:42.498747 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xm9wv" event={"ID":"e1a40aab-9df6-46b7-ae77-30f27474304d","Type":"ContainerDied","Data":"95030769d112ca044ae55f690f43571603a8addbf7e4c6fa08a67c0409685a38"} Jan 29 18:14:42 crc kubenswrapper[4886]: I0129 18:14:42.498763 4886 scope.go:117] "RemoveContainer" containerID="66a49c321d01b60ab3f2c9f17e95bf959950c32860ff7b7351a0c02780afd5b3" Jan 29 18:14:42 crc kubenswrapper[4886]: I0129 18:14:42.522529 4886 scope.go:117] "RemoveContainer" containerID="15dfacf562334e503bf98f0b143227cfe8c6890ca73ff759622e84fbb0b7592f" Jan 29 18:14:42 crc kubenswrapper[4886]: I0129 18:14:42.542904 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xm9wv"] Jan 29 18:14:42 crc kubenswrapper[4886]: I0129 18:14:42.551928 4886 scope.go:117] "RemoveContainer" containerID="73a76d9bf9407207bb16286ea217fd9e932d96ee9e61b5e551d230717409c7fd" Jan 29 18:14:42 crc kubenswrapper[4886]: I0129 18:14:42.554557 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xm9wv"] Jan 29 18:14:42 crc kubenswrapper[4886]: I0129 18:14:42.602116 4886 scope.go:117] "RemoveContainer" containerID="66a49c321d01b60ab3f2c9f17e95bf959950c32860ff7b7351a0c02780afd5b3" Jan 29 18:14:42 crc kubenswrapper[4886]: E0129 18:14:42.602721 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66a49c321d01b60ab3f2c9f17e95bf959950c32860ff7b7351a0c02780afd5b3\": container with ID starting with 66a49c321d01b60ab3f2c9f17e95bf959950c32860ff7b7351a0c02780afd5b3 not found: ID does not exist" containerID="66a49c321d01b60ab3f2c9f17e95bf959950c32860ff7b7351a0c02780afd5b3" Jan 29 18:14:42 crc kubenswrapper[4886]: I0129 18:14:42.602759 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66a49c321d01b60ab3f2c9f17e95bf959950c32860ff7b7351a0c02780afd5b3"} err="failed to get container status \"66a49c321d01b60ab3f2c9f17e95bf959950c32860ff7b7351a0c02780afd5b3\": rpc error: code = NotFound desc = could not find container \"66a49c321d01b60ab3f2c9f17e95bf959950c32860ff7b7351a0c02780afd5b3\": container with ID starting with 66a49c321d01b60ab3f2c9f17e95bf959950c32860ff7b7351a0c02780afd5b3 not found: ID does not exist" Jan 29 18:14:42 crc kubenswrapper[4886]: I0129 18:14:42.602790 4886 scope.go:117] "RemoveContainer" containerID="15dfacf562334e503bf98f0b143227cfe8c6890ca73ff759622e84fbb0b7592f" Jan 29 18:14:42 crc kubenswrapper[4886]: E0129 18:14:42.603251 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15dfacf562334e503bf98f0b143227cfe8c6890ca73ff759622e84fbb0b7592f\": container with ID starting with 15dfacf562334e503bf98f0b143227cfe8c6890ca73ff759622e84fbb0b7592f not found: ID does not exist" containerID="15dfacf562334e503bf98f0b143227cfe8c6890ca73ff759622e84fbb0b7592f" Jan 29 18:14:42 crc kubenswrapper[4886]: I0129 18:14:42.603301 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15dfacf562334e503bf98f0b143227cfe8c6890ca73ff759622e84fbb0b7592f"} err="failed to get container status \"15dfacf562334e503bf98f0b143227cfe8c6890ca73ff759622e84fbb0b7592f\": rpc error: code = NotFound desc = could not find 
container \"15dfacf562334e503bf98f0b143227cfe8c6890ca73ff759622e84fbb0b7592f\": container with ID starting with 15dfacf562334e503bf98f0b143227cfe8c6890ca73ff759622e84fbb0b7592f not found: ID does not exist" Jan 29 18:14:42 crc kubenswrapper[4886]: I0129 18:14:42.603368 4886 scope.go:117] "RemoveContainer" containerID="73a76d9bf9407207bb16286ea217fd9e932d96ee9e61b5e551d230717409c7fd" Jan 29 18:14:42 crc kubenswrapper[4886]: E0129 18:14:42.603768 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73a76d9bf9407207bb16286ea217fd9e932d96ee9e61b5e551d230717409c7fd\": container with ID starting with 73a76d9bf9407207bb16286ea217fd9e932d96ee9e61b5e551d230717409c7fd not found: ID does not exist" containerID="73a76d9bf9407207bb16286ea217fd9e932d96ee9e61b5e551d230717409c7fd" Jan 29 18:14:42 crc kubenswrapper[4886]: I0129 18:14:42.603805 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73a76d9bf9407207bb16286ea217fd9e932d96ee9e61b5e551d230717409c7fd"} err="failed to get container status \"73a76d9bf9407207bb16286ea217fd9e932d96ee9e61b5e551d230717409c7fd\": rpc error: code = NotFound desc = could not find container \"73a76d9bf9407207bb16286ea217fd9e932d96ee9e61b5e551d230717409c7fd\": container with ID starting with 73a76d9bf9407207bb16286ea217fd9e932d96ee9e61b5e551d230717409c7fd not found: ID does not exist" Jan 29 18:14:42 crc kubenswrapper[4886]: I0129 18:14:42.634464 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1a40aab-9df6-46b7-ae77-30f27474304d" path="/var/lib/kubelet/pods/e1a40aab-9df6-46b7-ae77-30f27474304d/volumes" Jan 29 18:14:50 crc kubenswrapper[4886]: I0129 18:14:50.615773 4886 scope.go:117] "RemoveContainer" containerID="b900b9c884451219b68e72739d460e4d06900b18f10f7003c7040961c812bb7b" Jan 29 18:14:50 crc kubenswrapper[4886]: E0129 18:14:50.616995 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:15:00 crc kubenswrapper[4886]: I0129 18:15:00.157992 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495175-dxl59"] Jan 29 18:15:00 crc kubenswrapper[4886]: E0129 18:15:00.159241 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a40aab-9df6-46b7-ae77-30f27474304d" containerName="extract-content" Jan 29 18:15:00 crc kubenswrapper[4886]: I0129 18:15:00.159258 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a40aab-9df6-46b7-ae77-30f27474304d" containerName="extract-content" Jan 29 18:15:00 crc kubenswrapper[4886]: E0129 18:15:00.159287 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a40aab-9df6-46b7-ae77-30f27474304d" containerName="registry-server" Jan 29 18:15:00 crc kubenswrapper[4886]: I0129 18:15:00.159296 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a40aab-9df6-46b7-ae77-30f27474304d" containerName="registry-server" Jan 29 18:15:00 crc kubenswrapper[4886]: E0129 18:15:00.159313 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cab12cac-196d-4567-b193-dbfe7e5dceac" 
containerName="registry-server" Jan 29 18:15:00 crc kubenswrapper[4886]: I0129 18:15:00.159342 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="cab12cac-196d-4567-b193-dbfe7e5dceac" containerName="registry-server" Jan 29 18:15:00 crc kubenswrapper[4886]: E0129 18:15:00.159380 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cab12cac-196d-4567-b193-dbfe7e5dceac" containerName="extract-utilities" Jan 29 18:15:00 crc kubenswrapper[4886]: I0129 18:15:00.159390 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="cab12cac-196d-4567-b193-dbfe7e5dceac" containerName="extract-utilities" Jan 29 18:15:00 crc kubenswrapper[4886]: E0129 18:15:00.159420 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a40aab-9df6-46b7-ae77-30f27474304d" containerName="extract-utilities" Jan 29 18:15:00 crc kubenswrapper[4886]: I0129 18:15:00.159429 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a40aab-9df6-46b7-ae77-30f27474304d" containerName="extract-utilities" Jan 29 18:15:00 crc kubenswrapper[4886]: E0129 18:15:00.159444 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cab12cac-196d-4567-b193-dbfe7e5dceac" containerName="extract-content" Jan 29 18:15:00 crc kubenswrapper[4886]: I0129 18:15:00.159452 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="cab12cac-196d-4567-b193-dbfe7e5dceac" containerName="extract-content" Jan 29 18:15:00 crc kubenswrapper[4886]: I0129 18:15:00.159735 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a40aab-9df6-46b7-ae77-30f27474304d" containerName="registry-server" Jan 29 18:15:00 crc kubenswrapper[4886]: I0129 18:15:00.159777 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="cab12cac-196d-4567-b193-dbfe7e5dceac" containerName="registry-server" Jan 29 18:15:00 crc kubenswrapper[4886]: I0129 18:15:00.160709 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495175-dxl59" Jan 29 18:15:00 crc kubenswrapper[4886]: I0129 18:15:00.163862 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 18:15:00 crc kubenswrapper[4886]: I0129 18:15:00.164403 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 18:15:00 crc kubenswrapper[4886]: I0129 18:15:00.177635 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495175-dxl59"] Jan 29 18:15:00 crc kubenswrapper[4886]: I0129 18:15:00.267157 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0dfd79b6-b491-45ff-9977-93e384a500a7-secret-volume\") pod \"collect-profiles-29495175-dxl59\" (UID: \"0dfd79b6-b491-45ff-9977-93e384a500a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495175-dxl59" Jan 29 18:15:00 crc kubenswrapper[4886]: I0129 18:15:00.267405 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0dfd79b6-b491-45ff-9977-93e384a500a7-config-volume\") pod \"collect-profiles-29495175-dxl59\" (UID: \"0dfd79b6-b491-45ff-9977-93e384a500a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495175-dxl59" Jan 29 18:15:00 crc kubenswrapper[4886]: I0129 18:15:00.267467 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58bxl\" (UniqueName: \"kubernetes.io/projected/0dfd79b6-b491-45ff-9977-93e384a500a7-kube-api-access-58bxl\") pod \"collect-profiles-29495175-dxl59\" (UID: \"0dfd79b6-b491-45ff-9977-93e384a500a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495175-dxl59" Jan 29 18:15:00 crc kubenswrapper[4886]: I0129 18:15:00.369793 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0dfd79b6-b491-45ff-9977-93e384a500a7-config-volume\") pod \"collect-profiles-29495175-dxl59\" (UID: \"0dfd79b6-b491-45ff-9977-93e384a500a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495175-dxl59" Jan 29 18:15:00 crc kubenswrapper[4886]: I0129 18:15:00.370192 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58bxl\" (UniqueName: \"kubernetes.io/projected/0dfd79b6-b491-45ff-9977-93e384a500a7-kube-api-access-58bxl\") pod \"collect-profiles-29495175-dxl59\" (UID: \"0dfd79b6-b491-45ff-9977-93e384a500a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495175-dxl59" Jan 29 18:15:00 crc kubenswrapper[4886]: I0129 18:15:00.370523 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0dfd79b6-b491-45ff-9977-93e384a500a7-secret-volume\") pod \"collect-profiles-29495175-dxl59\" (UID: \"0dfd79b6-b491-45ff-9977-93e384a500a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495175-dxl59" Jan 29 18:15:00 crc kubenswrapper[4886]: I0129 18:15:00.371322 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0dfd79b6-b491-45ff-9977-93e384a500a7-config-volume\") pod 
\"collect-profiles-29495175-dxl59\" (UID: \"0dfd79b6-b491-45ff-9977-93e384a500a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495175-dxl59" Jan 29 18:15:00 crc kubenswrapper[4886]: I0129 18:15:00.384298 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0dfd79b6-b491-45ff-9977-93e384a500a7-secret-volume\") pod \"collect-profiles-29495175-dxl59\" (UID: \"0dfd79b6-b491-45ff-9977-93e384a500a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495175-dxl59" Jan 29 18:15:00 crc kubenswrapper[4886]: I0129 18:15:00.395088 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58bxl\" (UniqueName: \"kubernetes.io/projected/0dfd79b6-b491-45ff-9977-93e384a500a7-kube-api-access-58bxl\") pod \"collect-profiles-29495175-dxl59\" (UID: \"0dfd79b6-b491-45ff-9977-93e384a500a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495175-dxl59" Jan 29 18:15:00 crc kubenswrapper[4886]: I0129 18:15:00.493925 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495175-dxl59" Jan 29 18:15:01 crc kubenswrapper[4886]: I0129 18:15:00.998784 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495175-dxl59"] Jan 29 18:15:01 crc kubenswrapper[4886]: I0129 18:15:01.740243 4886 generic.go:334] "Generic (PLEG): container finished" podID="0dfd79b6-b491-45ff-9977-93e384a500a7" containerID="46ec392561fcd2b377c433df08c45dc60e9a2f469268a1ca308bb44ae0fd25e0" exitCode=0 Jan 29 18:15:01 crc kubenswrapper[4886]: I0129 18:15:01.740337 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495175-dxl59" event={"ID":"0dfd79b6-b491-45ff-9977-93e384a500a7","Type":"ContainerDied","Data":"46ec392561fcd2b377c433df08c45dc60e9a2f469268a1ca308bb44ae0fd25e0"} Jan 29 18:15:01 crc kubenswrapper[4886]: I0129 18:15:01.740550 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495175-dxl59" event={"ID":"0dfd79b6-b491-45ff-9977-93e384a500a7","Type":"ContainerStarted","Data":"24ba151c4962237189e61a655c72587607c861741938cbac67db9cbfa17ad60d"} Jan 29 18:15:03 crc kubenswrapper[4886]: I0129 18:15:03.253348 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495175-dxl59" Jan 29 18:15:03 crc kubenswrapper[4886]: I0129 18:15:03.447552 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0dfd79b6-b491-45ff-9977-93e384a500a7-config-volume\") pod \"0dfd79b6-b491-45ff-9977-93e384a500a7\" (UID: \"0dfd79b6-b491-45ff-9977-93e384a500a7\") " Jan 29 18:15:03 crc kubenswrapper[4886]: I0129 18:15:03.447914 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0dfd79b6-b491-45ff-9977-93e384a500a7-secret-volume\") pod \"0dfd79b6-b491-45ff-9977-93e384a500a7\" (UID: \"0dfd79b6-b491-45ff-9977-93e384a500a7\") " Jan 29 18:15:03 crc kubenswrapper[4886]: I0129 18:15:03.448143 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58bxl\" (UniqueName: \"kubernetes.io/projected/0dfd79b6-b491-45ff-9977-93e384a500a7-kube-api-access-58bxl\") pod \"0dfd79b6-b491-45ff-9977-93e384a500a7\" (UID: \"0dfd79b6-b491-45ff-9977-93e384a500a7\") " Jan 29 18:15:03 crc kubenswrapper[4886]: I0129 18:15:03.448833 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dfd79b6-b491-45ff-9977-93e384a500a7-config-volume" (OuterVolumeSpecName: "config-volume") pod "0dfd79b6-b491-45ff-9977-93e384a500a7" (UID: "0dfd79b6-b491-45ff-9977-93e384a500a7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 18:15:03 crc kubenswrapper[4886]: I0129 18:15:03.452497 4886 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0dfd79b6-b491-45ff-9977-93e384a500a7-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 18:15:03 crc kubenswrapper[4886]: I0129 18:15:03.454927 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dfd79b6-b491-45ff-9977-93e384a500a7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0dfd79b6-b491-45ff-9977-93e384a500a7" (UID: "0dfd79b6-b491-45ff-9977-93e384a500a7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 18:15:03 crc kubenswrapper[4886]: I0129 18:15:03.457458 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dfd79b6-b491-45ff-9977-93e384a500a7-kube-api-access-58bxl" (OuterVolumeSpecName: "kube-api-access-58bxl") pod "0dfd79b6-b491-45ff-9977-93e384a500a7" (UID: "0dfd79b6-b491-45ff-9977-93e384a500a7"). InnerVolumeSpecName "kube-api-access-58bxl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 18:15:03 crc kubenswrapper[4886]: I0129 18:15:03.555098 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58bxl\" (UniqueName: \"kubernetes.io/projected/0dfd79b6-b491-45ff-9977-93e384a500a7-kube-api-access-58bxl\") on node \"crc\" DevicePath \"\"" Jan 29 18:15:03 crc kubenswrapper[4886]: I0129 18:15:03.555377 4886 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0dfd79b6-b491-45ff-9977-93e384a500a7-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 18:15:03 crc kubenswrapper[4886]: I0129 18:15:03.615106 4886 scope.go:117] "RemoveContainer" containerID="b900b9c884451219b68e72739d460e4d06900b18f10f7003c7040961c812bb7b" Jan 29 18:15:03 crc kubenswrapper[4886]: E0129 18:15:03.615491 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:15:03 crc kubenswrapper[4886]: I0129 18:15:03.768479 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495175-dxl59" event={"ID":"0dfd79b6-b491-45ff-9977-93e384a500a7","Type":"ContainerDied","Data":"24ba151c4962237189e61a655c72587607c861741938cbac67db9cbfa17ad60d"} Jan 29 18:15:03 crc kubenswrapper[4886]: I0129 18:15:03.768525 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24ba151c4962237189e61a655c72587607c861741938cbac67db9cbfa17ad60d" Jan 29 18:15:03 crc kubenswrapper[4886]: I0129 18:15:03.768578 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495175-dxl59" Jan 29 18:15:04 crc kubenswrapper[4886]: I0129 18:15:04.324043 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495130-cdv55"] Jan 29 18:15:04 crc kubenswrapper[4886]: I0129 18:15:04.334698 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495130-cdv55"] Jan 29 18:15:04 crc kubenswrapper[4886]: I0129 18:15:04.626351 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281" path="/var/lib/kubelet/pods/d0fd5fcc-58d6-4d14-a68b-0c10e4dc5281/volumes" Jan 29 18:15:17 crc kubenswrapper[4886]: I0129 18:15:17.615467 4886 scope.go:117] "RemoveContainer" containerID="b900b9c884451219b68e72739d460e4d06900b18f10f7003c7040961c812bb7b" Jan 29 18:15:17 crc kubenswrapper[4886]: E0129 18:15:17.616427 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:15:21 crc kubenswrapper[4886]: I0129 18:15:21.009016 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8x4lm"] Jan 29 18:15:21 crc kubenswrapper[4886]: E0129 18:15:21.010499 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dfd79b6-b491-45ff-9977-93e384a500a7" containerName="collect-profiles" Jan 29 18:15:21 crc kubenswrapper[4886]: I0129 18:15:21.010517 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dfd79b6-b491-45ff-9977-93e384a500a7" containerName="collect-profiles" Jan 29 18:15:21 crc kubenswrapper[4886]: I0129 18:15:21.010772 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dfd79b6-b491-45ff-9977-93e384a500a7" containerName="collect-profiles" Jan 29 18:15:21 crc kubenswrapper[4886]: I0129 18:15:21.013054 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8x4lm" Jan 29 18:15:21 crc kubenswrapper[4886]: I0129 18:15:21.022149 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8x4lm"] Jan 29 18:15:21 crc kubenswrapper[4886]: I0129 18:15:21.075355 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c23f34ba-7d01-4793-85de-7d1cecfcfd89-utilities\") pod \"redhat-operators-8x4lm\" (UID: \"c23f34ba-7d01-4793-85de-7d1cecfcfd89\") " pod="openshift-marketplace/redhat-operators-8x4lm" Jan 29 18:15:21 crc kubenswrapper[4886]: I0129 18:15:21.075448 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c23f34ba-7d01-4793-85de-7d1cecfcfd89-catalog-content\") pod \"redhat-operators-8x4lm\" (UID: \"c23f34ba-7d01-4793-85de-7d1cecfcfd89\") " pod="openshift-marketplace/redhat-operators-8x4lm" Jan 29 18:15:21 crc kubenswrapper[4886]: I0129 18:15:21.075519 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n5zm\" (UniqueName: \"kubernetes.io/projected/c23f34ba-7d01-4793-85de-7d1cecfcfd89-kube-api-access-9n5zm\") pod \"redhat-operators-8x4lm\" (UID: \"c23f34ba-7d01-4793-85de-7d1cecfcfd89\") " pod="openshift-marketplace/redhat-operators-8x4lm" Jan 29 18:15:21 crc kubenswrapper[4886]: I0129 18:15:21.178245 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c23f34ba-7d01-4793-85de-7d1cecfcfd89-utilities\") pod \"redhat-operators-8x4lm\" (UID: \"c23f34ba-7d01-4793-85de-7d1cecfcfd89\") " pod="openshift-marketplace/redhat-operators-8x4lm" Jan 29 18:15:21 crc kubenswrapper[4886]: I0129 18:15:21.178318 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c23f34ba-7d01-4793-85de-7d1cecfcfd89-catalog-content\") pod \"redhat-operators-8x4lm\" (UID: \"c23f34ba-7d01-4793-85de-7d1cecfcfd89\") " pod="openshift-marketplace/redhat-operators-8x4lm" Jan 29 18:15:21 crc kubenswrapper[4886]: I0129 18:15:21.178390 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n5zm\" (UniqueName: \"kubernetes.io/projected/c23f34ba-7d01-4793-85de-7d1cecfcfd89-kube-api-access-9n5zm\") pod \"redhat-operators-8x4lm\" (UID: \"c23f34ba-7d01-4793-85de-7d1cecfcfd89\") " pod="openshift-marketplace/redhat-operators-8x4lm" Jan 29 18:15:21 crc kubenswrapper[4886]: I0129 18:15:21.178817 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c23f34ba-7d01-4793-85de-7d1cecfcfd89-utilities\") pod \"redhat-operators-8x4lm\" (UID: \"c23f34ba-7d01-4793-85de-7d1cecfcfd89\") " pod="openshift-marketplace/redhat-operators-8x4lm" Jan 29 18:15:21 crc kubenswrapper[4886]: I0129 18:15:21.178904 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c23f34ba-7d01-4793-85de-7d1cecfcfd89-catalog-content\") pod \"redhat-operators-8x4lm\" (UID: \"c23f34ba-7d01-4793-85de-7d1cecfcfd89\") " pod="openshift-marketplace/redhat-operators-8x4lm" Jan 29 18:15:21 crc kubenswrapper[4886]: I0129 18:15:21.205656 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9n5zm\" (UniqueName: \"kubernetes.io/projected/c23f34ba-7d01-4793-85de-7d1cecfcfd89-kube-api-access-9n5zm\") pod \"redhat-operators-8x4lm\" (UID: \"c23f34ba-7d01-4793-85de-7d1cecfcfd89\") " pod="openshift-marketplace/redhat-operators-8x4lm" Jan 29 18:15:21 crc kubenswrapper[4886]: I0129 18:15:21.347484 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8x4lm" Jan 29 18:15:21 crc kubenswrapper[4886]: I0129 18:15:21.868549 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8x4lm"] Jan 29 18:15:21 crc kubenswrapper[4886]: I0129 18:15:21.997573 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8x4lm" event={"ID":"c23f34ba-7d01-4793-85de-7d1cecfcfd89","Type":"ContainerStarted","Data":"3a7d98625554d3cac706804f00e5be38dcbb162664ca97b0c5839e0488d0d6e3"} Jan 29 18:15:23 crc kubenswrapper[4886]: I0129 18:15:23.015238 4886 generic.go:334] "Generic (PLEG): container finished" podID="c23f34ba-7d01-4793-85de-7d1cecfcfd89" containerID="159880fb7797e3eb5c8a3430fee5383494c98b722c378515cf30130e1a4baf72" exitCode=0 Jan 29 18:15:23 crc kubenswrapper[4886]: I0129 18:15:23.015473 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8x4lm" event={"ID":"c23f34ba-7d01-4793-85de-7d1cecfcfd89","Type":"ContainerDied","Data":"159880fb7797e3eb5c8a3430fee5383494c98b722c378515cf30130e1a4baf72"} Jan 29 18:15:24 crc kubenswrapper[4886]: I0129 18:15:24.034180 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8x4lm" event={"ID":"c23f34ba-7d01-4793-85de-7d1cecfcfd89","Type":"ContainerStarted","Data":"385b78cd5b959e5ed39ba705ed1dbdacd0d3fe904a297951d7779db2ed426815"} Jan 29 18:15:29 crc kubenswrapper[4886]: I0129 18:15:29.093344 4886 generic.go:334] "Generic (PLEG): container finished" podID="c23f34ba-7d01-4793-85de-7d1cecfcfd89" containerID="385b78cd5b959e5ed39ba705ed1dbdacd0d3fe904a297951d7779db2ed426815" exitCode=0 Jan 29 18:15:29 crc kubenswrapper[4886]: I0129 18:15:29.093391 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8x4lm" event={"ID":"c23f34ba-7d01-4793-85de-7d1cecfcfd89","Type":"ContainerDied","Data":"385b78cd5b959e5ed39ba705ed1dbdacd0d3fe904a297951d7779db2ed426815"} Jan 29 18:15:30 crc kubenswrapper[4886]: I0129 18:15:30.112666 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8x4lm" event={"ID":"c23f34ba-7d01-4793-85de-7d1cecfcfd89","Type":"ContainerStarted","Data":"fabd372238950cdaa18aa497284d0f0db5e1eadfa5c30b8c1ca62969f03c7ae2"} Jan 29 18:15:30 crc kubenswrapper[4886]: I0129 18:15:30.150418 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8x4lm" podStartSLOduration=3.59326439 podStartE2EDuration="10.150391429s" podCreationTimestamp="2026-01-29 18:15:20 +0000 UTC" firstStartedPulling="2026-01-29 18:15:23.017621923 +0000 UTC m=+6805.926341205" lastFinishedPulling="2026-01-29 18:15:29.574748942 +0000 UTC m=+6812.483468244" observedRunningTime="2026-01-29 18:15:30.136039692 +0000 UTC m=+6813.044759004" watchObservedRunningTime="2026-01-29 18:15:30.150391429 +0000 UTC m=+6813.059110741" Jan 29 18:15:31 crc kubenswrapper[4886]: I0129 18:15:31.348110 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8x4lm" Jan 29 
18:15:31 crc kubenswrapper[4886]: I0129 18:15:31.348560 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8x4lm" Jan 29 18:15:31 crc kubenswrapper[4886]: I0129 18:15:31.615048 4886 scope.go:117] "RemoveContainer" containerID="b900b9c884451219b68e72739d460e4d06900b18f10f7003c7040961c812bb7b" Jan 29 18:15:31 crc kubenswrapper[4886]: E0129 18:15:31.615386 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:15:32 crc kubenswrapper[4886]: I0129 18:15:32.495030 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8x4lm" podUID="c23f34ba-7d01-4793-85de-7d1cecfcfd89" containerName="registry-server" probeResult="failure" output=< Jan 29 18:15:32 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Jan 29 18:15:32 crc kubenswrapper[4886]: > Jan 29 18:15:41 crc kubenswrapper[4886]: I0129 18:15:41.407775 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8x4lm" Jan 29 18:15:41 crc kubenswrapper[4886]: I0129 18:15:41.478124 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8x4lm" Jan 29 18:15:41 crc kubenswrapper[4886]: I0129 18:15:41.654206 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8x4lm"] Jan 29 18:15:43 crc kubenswrapper[4886]: I0129 18:15:43.282068 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8x4lm" podUID="c23f34ba-7d01-4793-85de-7d1cecfcfd89" containerName="registry-server" containerID="cri-o://fabd372238950cdaa18aa497284d0f0db5e1eadfa5c30b8c1ca62969f03c7ae2" gracePeriod=2 Jan 29 18:15:43 crc kubenswrapper[4886]: I0129 18:15:43.851755 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8x4lm" Jan 29 18:15:44 crc kubenswrapper[4886]: I0129 18:15:44.002126 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c23f34ba-7d01-4793-85de-7d1cecfcfd89-catalog-content\") pod \"c23f34ba-7d01-4793-85de-7d1cecfcfd89\" (UID: \"c23f34ba-7d01-4793-85de-7d1cecfcfd89\") " Jan 29 18:15:44 crc kubenswrapper[4886]: I0129 18:15:44.002500 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n5zm\" (UniqueName: \"kubernetes.io/projected/c23f34ba-7d01-4793-85de-7d1cecfcfd89-kube-api-access-9n5zm\") pod \"c23f34ba-7d01-4793-85de-7d1cecfcfd89\" (UID: \"c23f34ba-7d01-4793-85de-7d1cecfcfd89\") " Jan 29 18:15:44 crc kubenswrapper[4886]: I0129 18:15:44.002617 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c23f34ba-7d01-4793-85de-7d1cecfcfd89-utilities\") pod \"c23f34ba-7d01-4793-85de-7d1cecfcfd89\" (UID: \"c23f34ba-7d01-4793-85de-7d1cecfcfd89\") " Jan 29 18:15:44 crc kubenswrapper[4886]: I0129 18:15:44.004392 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c23f34ba-7d01-4793-85de-7d1cecfcfd89-utilities" (OuterVolumeSpecName: "utilities") pod "c23f34ba-7d01-4793-85de-7d1cecfcfd89" (UID: "c23f34ba-7d01-4793-85de-7d1cecfcfd89"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 18:15:44 crc kubenswrapper[4886]: I0129 18:15:44.028674 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c23f34ba-7d01-4793-85de-7d1cecfcfd89-kube-api-access-9n5zm" (OuterVolumeSpecName: "kube-api-access-9n5zm") pod "c23f34ba-7d01-4793-85de-7d1cecfcfd89" (UID: "c23f34ba-7d01-4793-85de-7d1cecfcfd89"). InnerVolumeSpecName "kube-api-access-9n5zm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 18:15:44 crc kubenswrapper[4886]: I0129 18:15:44.105689 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c23f34ba-7d01-4793-85de-7d1cecfcfd89-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 18:15:44 crc kubenswrapper[4886]: I0129 18:15:44.105720 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n5zm\" (UniqueName: \"kubernetes.io/projected/c23f34ba-7d01-4793-85de-7d1cecfcfd89-kube-api-access-9n5zm\") on node \"crc\" DevicePath \"\"" Jan 29 18:15:44 crc kubenswrapper[4886]: I0129 18:15:44.146782 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c23f34ba-7d01-4793-85de-7d1cecfcfd89-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c23f34ba-7d01-4793-85de-7d1cecfcfd89" (UID: "c23f34ba-7d01-4793-85de-7d1cecfcfd89"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 18:15:44 crc kubenswrapper[4886]: I0129 18:15:44.207884 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c23f34ba-7d01-4793-85de-7d1cecfcfd89-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 18:15:44 crc kubenswrapper[4886]: I0129 18:15:44.296241 4886 generic.go:334] "Generic (PLEG): container finished" podID="c23f34ba-7d01-4793-85de-7d1cecfcfd89" containerID="fabd372238950cdaa18aa497284d0f0db5e1eadfa5c30b8c1ca62969f03c7ae2" exitCode=0 Jan 29 18:15:44 crc kubenswrapper[4886]: I0129 18:15:44.296277 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8x4lm" event={"ID":"c23f34ba-7d01-4793-85de-7d1cecfcfd89","Type":"ContainerDied","Data":"fabd372238950cdaa18aa497284d0f0db5e1eadfa5c30b8c1ca62969f03c7ae2"} Jan 29 18:15:44 crc kubenswrapper[4886]: I0129 18:15:44.296308 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8x4lm" event={"ID":"c23f34ba-7d01-4793-85de-7d1cecfcfd89","Type":"ContainerDied","Data":"3a7d98625554d3cac706804f00e5be38dcbb162664ca97b0c5839e0488d0d6e3"} Jan 29 18:15:44 crc kubenswrapper[4886]: I0129 18:15:44.296343 4886 scope.go:117] "RemoveContainer" containerID="fabd372238950cdaa18aa497284d0f0db5e1eadfa5c30b8c1ca62969f03c7ae2" Jan 29 18:15:44 crc kubenswrapper[4886]: I0129 18:15:44.296384 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8x4lm" Jan 29 18:15:44 crc kubenswrapper[4886]: I0129 18:15:44.351461 4886 scope.go:117] "RemoveContainer" containerID="385b78cd5b959e5ed39ba705ed1dbdacd0d3fe904a297951d7779db2ed426815" Jan 29 18:15:44 crc kubenswrapper[4886]: I0129 18:15:44.361861 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8x4lm"] Jan 29 18:15:44 crc kubenswrapper[4886]: I0129 18:15:44.388525 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8x4lm"] Jan 29 18:15:44 crc kubenswrapper[4886]: I0129 18:15:44.388851 4886 scope.go:117] "RemoveContainer" containerID="159880fb7797e3eb5c8a3430fee5383494c98b722c378515cf30130e1a4baf72" Jan 29 18:15:44 crc kubenswrapper[4886]: I0129 18:15:44.441356 4886 scope.go:117] "RemoveContainer" containerID="fabd372238950cdaa18aa497284d0f0db5e1eadfa5c30b8c1ca62969f03c7ae2" Jan 29 18:15:44 crc kubenswrapper[4886]: E0129 18:15:44.441755 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fabd372238950cdaa18aa497284d0f0db5e1eadfa5c30b8c1ca62969f03c7ae2\": container with ID starting with fabd372238950cdaa18aa497284d0f0db5e1eadfa5c30b8c1ca62969f03c7ae2 not found: ID does not exist" containerID="fabd372238950cdaa18aa497284d0f0db5e1eadfa5c30b8c1ca62969f03c7ae2" Jan 29 18:15:44 crc kubenswrapper[4886]: I0129 18:15:44.441783 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fabd372238950cdaa18aa497284d0f0db5e1eadfa5c30b8c1ca62969f03c7ae2"} err="failed to get container status \"fabd372238950cdaa18aa497284d0f0db5e1eadfa5c30b8c1ca62969f03c7ae2\": rpc error: code = NotFound desc = could not find container \"fabd372238950cdaa18aa497284d0f0db5e1eadfa5c30b8c1ca62969f03c7ae2\": container with ID starting with fabd372238950cdaa18aa497284d0f0db5e1eadfa5c30b8c1ca62969f03c7ae2 not found: ID does not exist" Jan 29 18:15:44 crc 
kubenswrapper[4886]: I0129 18:15:44.441801 4886 scope.go:117] "RemoveContainer" containerID="385b78cd5b959e5ed39ba705ed1dbdacd0d3fe904a297951d7779db2ed426815" Jan 29 18:15:44 crc kubenswrapper[4886]: E0129 18:15:44.442313 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"385b78cd5b959e5ed39ba705ed1dbdacd0d3fe904a297951d7779db2ed426815\": container with ID starting with 385b78cd5b959e5ed39ba705ed1dbdacd0d3fe904a297951d7779db2ed426815 not found: ID does not exist" containerID="385b78cd5b959e5ed39ba705ed1dbdacd0d3fe904a297951d7779db2ed426815" Jan 29 18:15:44 crc kubenswrapper[4886]: I0129 18:15:44.442343 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"385b78cd5b959e5ed39ba705ed1dbdacd0d3fe904a297951d7779db2ed426815"} err="failed to get container status \"385b78cd5b959e5ed39ba705ed1dbdacd0d3fe904a297951d7779db2ed426815\": rpc error: code = NotFound desc = could not find container \"385b78cd5b959e5ed39ba705ed1dbdacd0d3fe904a297951d7779db2ed426815\": container with ID starting with 385b78cd5b959e5ed39ba705ed1dbdacd0d3fe904a297951d7779db2ed426815 not found: ID does not exist" Jan 29 18:15:44 crc kubenswrapper[4886]: I0129 18:15:44.442355 4886 scope.go:117] "RemoveContainer" containerID="159880fb7797e3eb5c8a3430fee5383494c98b722c378515cf30130e1a4baf72" Jan 29 18:15:44 crc kubenswrapper[4886]: E0129 18:15:44.442949 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"159880fb7797e3eb5c8a3430fee5383494c98b722c378515cf30130e1a4baf72\": container with ID starting with 159880fb7797e3eb5c8a3430fee5383494c98b722c378515cf30130e1a4baf72 not found: ID does not exist" containerID="159880fb7797e3eb5c8a3430fee5383494c98b722c378515cf30130e1a4baf72" Jan 29 18:15:44 crc kubenswrapper[4886]: I0129 18:15:44.443014 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"159880fb7797e3eb5c8a3430fee5383494c98b722c378515cf30130e1a4baf72"} err="failed to get container status \"159880fb7797e3eb5c8a3430fee5383494c98b722c378515cf30130e1a4baf72\": rpc error: code = NotFound desc = could not find container \"159880fb7797e3eb5c8a3430fee5383494c98b722c378515cf30130e1a4baf72\": container with ID starting with 159880fb7797e3eb5c8a3430fee5383494c98b722c378515cf30130e1a4baf72 not found: ID does not exist" Jan 29 18:15:44 crc kubenswrapper[4886]: I0129 18:15:44.627194 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c23f34ba-7d01-4793-85de-7d1cecfcfd89" path="/var/lib/kubelet/pods/c23f34ba-7d01-4793-85de-7d1cecfcfd89/volumes" Jan 29 18:15:46 crc kubenswrapper[4886]: I0129 18:15:46.619997 4886 scope.go:117] "RemoveContainer" containerID="b900b9c884451219b68e72739d460e4d06900b18f10f7003c7040961c812bb7b" Jan 29 18:15:46 crc kubenswrapper[4886]: E0129 18:15:46.621142 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gx4vp_openshift-machine-config-operator(5a5d8fc0-7aa5-431a-9add-9bdcc6d20091)\"" pod="openshift-machine-config-operator/machine-config-daemon-gx4vp" podUID="5a5d8fc0-7aa5-431a-9add-9bdcc6d20091" Jan 29 18:15:57 crc kubenswrapper[4886]: I0129 18:15:57.189912 4886 scope.go:117] "RemoveContainer" containerID="e970dea6a6e8251fa9ff24484a3f5ffaee4ce0d2fad251a5d786e848db7373be"